// r3721:98f2507c (Benoit Boissinot): only print a warning when no username is specified
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  that user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, exactly one of these files is used, chosen
  according to whether the HOME environment variable is defined.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.
  On Unix, most of this file will be ignored if it doesn't belong
  to a trusted user or to a trusted group. See the documentation
  for the trusted section below for more details.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

Values can optionally contain format strings which refer to other
values in the same section, or to values in a special DEFAULT section.

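Interpolation example (the "%(name)s" syntax shown here follows
Python's ConfigParser, on which this file format is based; the section
and values are purely illustrative):

  [paths]
  base = http://example.com/repos
  project = %(base)s/project
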
Lines beginning with "#" or ";" are ignored and may be used to provide
comments.

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

defaults::
  Use the [defaults] section to define command defaults, i.e. the
  default options/arguments to pass to the specified commands.

  The following example makes 'hg log' run in verbose mode, and
  'hg status' show only the modified files, by default.

    [defaults]
    log = -v
    status = -m

  The actual commands, instead of their aliases, must be used when
  defining command defaults. The command defaults will also be
  applied to the aliases of the commands defined.

email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  to;;
    Optional. Comma-separated list of recipients' email addresses.
  cc;;
    Optional. Comma-separated list of carbon copy recipients'
    email addresses.
  bcc;;
    Optional. Comma-separated list of blind carbon copy
    recipients' email addresses. Cannot be set interactively.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, use as name of program to run that
    acts like sendmail (takes "-f" option for sender, list of
    recipients on command line, message on stdin). Normally, setting
    this to "sendmail" or "/usr/sbin/sendmail" is enough to use
    sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give added
  useful information. For each hook below, the environment variables
  it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE. URL from
    which changes came is in $HG_URL.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE. URL that was the source of changes is in $HG_URL.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail. URL from which
    changes will come is in $HG_URL.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull,
    push (outbound) and bundle commands, but is not an effective
    barrier there, since the files can simply be copied instead.
    Source of operation is in $HG_SOURCE. If "serve", operation is
    happening on behalf of remote ssh or http repository. If "push",
    "pull" or "bundle", operation is happening on behalf of
    repository on same system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail. URL that was the source of
    changes is in $HG_URL.
  pretxncommit;;
    Run after a changeset has been created but the transaction not yet
    committed. Changeset is visible to hook program. This lets you
    validate commit message and changes. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the transaction to
    be rolled back. ID of changeset is in $HG_NODE. Parent changeset
    IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
    failed (e.g. because conflicts were not resolved), $HG_ERROR=1.

  Note: in earlier releases, the names of hook environment variables
  did not have a "HG_" prefix. The old unprefixed names are no longer
  provided in the environment.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  If a Python hook returns a "true" value or raises an exception, this
  is treated as failure of the hook.
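
  As a sketch, an in-process pretxncommit hook might look like the
  module below. The module name "checkmsg", the hook suffix, and the
  message-length policy are invented for illustration; the changeset
  lookup shown (repo[node]) follows later Mercurial APIs and may
  differ in older releases.

```python
# checkmsg.py -- a hypothetical Python hook module, enabled with:
#   [hooks]
#   pretxncommit.checkmsg = python:checkmsg.check_message

def check_message(ui, repo, hooktype, node=None, **kwargs):
    """Reject commits whose description is shorter than 10 characters.

    Returning a true value (or raising an exception) makes the hook
    fail; for pretxncommit that rolls the transaction back.
    """
    description = repo[node].description()
    if len(description.strip()) < 10:
        ui.warn("commit message too short, commit rejected\n")
        return True   # failure: transaction is rolled back
    return False      # success: commit proceeds
```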

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.

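  Proxy example (host names and credentials are illustrative):

    [http_proxy]
    host = myproxy:8000
    no = localhost,intranet.example.com
    user = alice
    passwd = secret
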
smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Host name of mail server, e.g. "mail.example.com".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. The hostname that the sender can use to identify
    itself to the MTA.

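  SMTP example (server name and credentials are illustrative):

    [smtp]
    host = mail.example.com
    port = 25
    tls = True
    username = joe
    password = secret
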
paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is set to repository from which the current repository
    was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.

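  Paths example (the URLs are illustrative):

    [paths]
    default = http://hg.example.com/main
    default-push = ssh://hg.example.com/main
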
server::
  Controls generic server settings.
  uncompressed;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.

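  Server example, enabling uncompressed streaming clones:

    [server]
    uncompressed = True
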
trusted::
  For security reasons, Mercurial will not use the settings in
  the .hg/hgrc file from a repository if it doesn't belong to a
  trusted user or to a trusted group. The main exception is the
  web interface, which automatically uses some safe settings, since
  it's common to serve repositories from different users.

  This section specifies what users and groups are trusted. The
  current user is always trusted. To trust everybody, list a user
  or a group with name "*".

  users;;
    Comma-separated list of trusted users.
  groups;;
    Comma-separated list of trusted groups.

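  Trusted example (the user and group names are illustrative):

    [trusted]
    users = alice, bob
    groups = devel
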
385 ui::
385 ui::
386 User interface controls.
386 User interface controls.
387 debug;;
387 debug;;
388 Print debugging information. True or False. Default is False.
388 Print debugging information. True or False. Default is False.
389 editor;;
389 editor;;
390 The editor to use during a commit. Default is $EDITOR or "vi".
390 The editor to use during a commit. Default is $EDITOR or "vi".
391 ignore;;
391 ignore;;
392 A file to read per-user ignore patterns from. This file should be in
392 A file to read per-user ignore patterns from. This file should be in
393 the same format as a repository-wide .hgignore file. This option
393 the same format as a repository-wide .hgignore file. This option
394 supports hook syntax, so if you want to specify multiple ignore
394 supports hook syntax, so if you want to specify multiple ignore
395 files, you can do so by setting something like
395 files, you can do so by setting something like
396 "ignore.other = ~/.hgignore2". For details of the ignore file
396 "ignore.other = ~/.hgignore2". For details of the ignore file
397 format, see the hgignore(5) man page.
397 format, see the hgignore(5) man page.
398 interactive;;
398 interactive;;
399 Whether to allow prompting the user. True or False. Default is True.
399 Whether to allow prompting the user. True or False. Default is True.
400 logtemplate;;
400 logtemplate;;
401 Template string for commands that print changesets.
401 Template string for commands that print changesets.
402 style;;
402 style;;
403 Name of style to use for command output.
403 Name of style to use for command output.
404 merge;;
404 merge;;
405 The conflict resolution program to use during a manual merge.
405 The conflict resolution program to use during a manual merge.
406 Default is "hgmerge".
406 Default is "hgmerge".
407 quiet;;
407 quiet;;
408 Reduce the amount of output printed. True or False. Default is False.
408 Reduce the amount of output printed. True or False. Default is False.
409 remotecmd;;
409 remotecmd;;
410 remote command to use for clone/push/pull operations. Default is 'hg'.
410 remote command to use for clone/push/pull operations. Default is 'hg'.
411 ssh;;
411 ssh;;
412 command to use for SSH connections. Default is 'ssh'.
412 command to use for SSH connections. Default is 'ssh'.
413 strict;;
413 strict;;
414 Require exact command names, instead of allowing unambiguous
414 Require exact command names, instead of allowing unambiguous
415 abbreviations. True or False. Default is False.
415 abbreviations. True or False. Default is False.
416 timeout;;
416 timeout;;
417 The timeout used when a lock is held (in seconds); a negative value
417 The timeout used when a lock is held (in seconds); a negative value
418 means no timeout. Default is 600.
418 means no timeout. Default is 600.
419 username;;
419 username;;
420 The committer of a changeset created when running "commit".
420 The committer of a changeset created when running "commit".
421 Typically a person's name and email address, e.g. "Fred Widget
421 Typically a person's name and email address, e.g. "Fred Widget
422 <fred@example.com>". Default is $EMAIL. If no default is found, or the
422 <fred@example.com>". Default is $EMAIL or username@hostname.
423 configured username is empty, it has to be specified manually.
424 verbose;;
423 verbose;;
425 Increase the amount of output printed. True or False. Default is False.
424 Increase the amount of output printed. True or False. Default is False.
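A typical per-user ~/.hgrc combining several of the [ui] options above might look like this (the values are illustrative):

```ini
[ui]
username = Fred Widget <fred@example.com>
editor = vim
merge = hgmerge
ignore = ~/.hgignore
```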
426
425
427
426
428 web::
427 web::
429 Web interface configuration.
428 Web interface configuration.
430 accesslog;;
429 accesslog;;
431 Where to output the access log. Default is stdout.
430 Where to output the access log. Default is stdout.
432 address;;
431 address;;
433 Interface address to bind to. Default is all.
432 Interface address to bind to. Default is all.
434 allow_archive;;
433 allow_archive;;
435 List of archive formats (bz2, gz, zip) allowed for downloading.
434 List of archive formats (bz2, gz, zip) allowed for downloading.
436 Default is empty.
435 Default is empty.
437 allowbz2;;
436 allowbz2;;
438 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
437 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
439 Default is false.
438 Default is false.
440 allowgz;;
439 allowgz;;
441 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
440 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
442 Default is false.
441 Default is false.
443 allowpull;;
442 allowpull;;
444 Whether to allow pulling from the repository. Default is true.
443 Whether to allow pulling from the repository. Default is true.
445 allow_push;;
444 allow_push;;
446 Whether to allow pushing to the repository. If empty or not set,
445 Whether to allow pushing to the repository. If empty or not set,
447 push is not allowed. If the special value "*", any remote user
446 push is not allowed. If the special value "*", any remote user
448 can push, including unauthenticated users. Otherwise, the remote
447 can push, including unauthenticated users. Otherwise, the remote
449 user must have been authenticated, and the authenticated user name
448 user must have been authenticated, and the authenticated user name
450 must be present in this list (separated by whitespace or ",").
449 must be present in this list (separated by whitespace or ",").
451 The contents of the allow_push list are examined after the
450 The contents of the allow_push list are examined after the
452 deny_push list.
451 deny_push list.
453 allowzip;;
452 allowzip;;
454 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
453 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
455 Default is false. This feature creates temporary files.
454 Default is false. This feature creates temporary files.
456 baseurl;;
455 baseurl;;
457 Base URL to use when publishing URLs in other locations, so
456 Base URL to use when publishing URLs in other locations, so
458 third-party tools like email notification hooks can construct URLs.
457 third-party tools like email notification hooks can construct URLs.
459 Example: "http://hgserver/repos/"
458 Example: "http://hgserver/repos/"
460 contact;;
459 contact;;
461 Name or email address of the person in charge of the repository.
460 Name or email address of the person in charge of the repository.
462 Default is "unknown".
461 Default is "unknown".
463 deny_push;;
462 deny_push;;
464 Whether to deny pushing to the repository. If empty or not set,
463 Whether to deny pushing to the repository. If empty or not set,
465 push is not denied. If the special value "*", all remote users
464 push is not denied. If the special value "*", all remote users
466 are denied push. Otherwise, unauthenticated users are all denied,
465 are denied push. Otherwise, unauthenticated users are all denied,
467 and any authenticated user name present in this list (separated by
466 and any authenticated user name present in this list (separated by
468 whitespace or ",") is also denied. The contents of the deny_push
467 whitespace or ",") is also denied. The contents of the deny_push
469 list are examined before the allow_push list.
468 list are examined before the allow_push list.
470 description;;
469 description;;
471 Textual description of the repository's purpose or contents.
470 Textual description of the repository's purpose or contents.
472 Default is "unknown".
471 Default is "unknown".
473 errorlog;;
472 errorlog;;
474 Where to output the error log. Default is stderr.
473 Where to output the error log. Default is stderr.
475 ipv6;;
474 ipv6;;
476 Whether to use IPv6. Default is false.
475 Whether to use IPv6. Default is false.
477 name;;
476 name;;
478 Repository name to use in the web interface. Default is the current
477 Repository name to use in the web interface. Default is the current
479 working directory.
478 working directory.
480 maxchanges;;
479 maxchanges;;
481 Maximum number of changes to list on the changelog. Default is 10.
480 Maximum number of changes to list on the changelog. Default is 10.
482 maxfiles;;
481 maxfiles;;
483 Maximum number of files to list per changeset. Default is 10.
482 Maximum number of files to list per changeset. Default is 10.
484 port;;
483 port;;
485 Port to listen on. Default is 8000.
484 Port to listen on. Default is 8000.
486 push_ssl;;
485 push_ssl;;
487 Whether to require that inbound pushes be transported over SSL to
486 Whether to require that inbound pushes be transported over SSL to
488 prevent password sniffing. Default is true.
487 prevent password sniffing. Default is true.
489 stripes;;
488 stripes;;
490 How many lines a "zebra stripe" should span in multiline output.
489 How many lines a "zebra stripe" should span in multiline output.
491 Default is 1; set to 0 to disable.
490 Default is 1; set to 0 to disable.
492 style;;
491 style;;
493 Which template map style to use.
492 Which template map style to use.
494 templates;;
493 templates;;
495 Where to find the HTML templates. Default is install path.
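Putting the push-control options above together, a server-side hgrc that requires SSL, allows pushes from two authenticated users and denies a third might look like this (user names are illustrative; recall that deny_push is examined before allow_push):

```ini
[web]
push_ssl = true
allow_push = alice, bob
deny_push = mallory
allow_archive = gz, zip
contact = Fred Widget <fred@example.com>
```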
494 Where to find the HTML templates. Default is install path.
496
495
497
496
498 AUTHOR
497 AUTHOR
499 ------
498 ------
500 Bryan O'Sullivan <bos@serpentine.com>.
499 Bryan O'Sullivan <bos@serpentine.com>.
501
500
502 Mercurial was written by Matt Mackall <mpm@selenic.com>.
501 Mercurial was written by Matt Mackall <mpm@selenic.com>.
503
502
504 SEE ALSO
503 SEE ALSO
505 --------
504 --------
506 hg(1), hgignore(5)
505 hg(1), hgignore(5)
507
506
508 COPYING
507 COPYING
509 -------
508 -------
510 This manual page is copyright 2005 Bryan O'Sullivan.
509 This manual page is copyright 2005 Bryan O'Sullivan.
511 Mercurial is copyright 2005, 2006 Matt Mackall.
510 Mercurial is copyright 2005, 2006 Matt Mackall.
512 Free use of this software is granted under the terms of the GNU General
511 Free use of this software is granted under the terms of the GNU General
513 Public License (GPL).
512 Public License (GPL).
@@ -1,1896 +1,1897 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from i18n import gettext as _
9 from i18n import gettext as _
10 from demandload import *
10 from demandload import *
11 import repo
11 import repo
12 demandload(globals(), "appendfile changegroup")
12 demandload(globals(), "appendfile changegroup")
13 demandload(globals(), "changelog dirstate filelog manifest context")
13 demandload(globals(), "changelog dirstate filelog manifest context")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 demandload(globals(), "os revlog time util")
15 demandload(globals(), "os revlog time util")
16
16
17 class localrepository(repo.repository):
17 class localrepository(repo.repository):
18 capabilities = ('lookup', 'changegroupsubset')
18 capabilities = ('lookup', 'changegroupsubset')
19
19
20 def __del__(self):
20 def __del__(self):
21 self.transhandle = None
21 self.transhandle = None
22 def __init__(self, parentui, path=None, create=0):
22 def __init__(self, parentui, path=None, create=0):
23 repo.repository.__init__(self)
23 repo.repository.__init__(self)
24 if not path:
24 if not path:
25 p = os.getcwd()
25 p = os.getcwd()
26 while not os.path.isdir(os.path.join(p, ".hg")):
26 while not os.path.isdir(os.path.join(p, ".hg")):
27 oldp = p
27 oldp = p
28 p = os.path.dirname(p)
28 p = os.path.dirname(p)
29 if p == oldp:
29 if p == oldp:
30 raise repo.RepoError(_("There is no Mercurial repository"
30 raise repo.RepoError(_("There is no Mercurial repository"
31 " here (.hg not found)"))
31 " here (.hg not found)"))
32 path = p
32 path = p
33 self.path = os.path.join(path, ".hg")
33 self.path = os.path.join(path, ".hg")
34
34
35 if not os.path.isdir(self.path):
35 if not os.path.isdir(self.path):
36 if create:
36 if create:
37 if not os.path.exists(path):
37 if not os.path.exists(path):
38 os.mkdir(path)
38 os.mkdir(path)
39 os.mkdir(self.path)
39 os.mkdir(self.path)
40 else:
40 else:
41 raise repo.RepoError(_("repository %s not found") % path)
41 raise repo.RepoError(_("repository %s not found") % path)
42 elif create:
42 elif create:
43 raise repo.RepoError(_("repository %s already exists") % path)
43 raise repo.RepoError(_("repository %s already exists") % path)
44
44
45 self.root = os.path.realpath(path)
45 self.root = os.path.realpath(path)
46 self.origroot = path
46 self.origroot = path
47 self.ui = ui.ui(parentui=parentui)
47 self.ui = ui.ui(parentui=parentui)
48 self.opener = util.opener(self.path)
48 self.opener = util.opener(self.path)
49 self.sopener = util.opener(self.path)
49 self.sopener = util.opener(self.path)
50 self.wopener = util.opener(self.root)
50 self.wopener = util.opener(self.root)
51
51
52 try:
52 try:
53 self.ui.readconfig(self.join("hgrc"), self.root)
53 self.ui.readconfig(self.join("hgrc"), self.root)
54 except IOError:
54 except IOError:
55 pass
55 pass
56
56
57 v = self.ui.configrevlog()
57 v = self.ui.configrevlog()
58 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
58 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
59 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
59 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
60 fl = v.get('flags', None)
60 fl = v.get('flags', None)
61 flags = 0
61 flags = 0
62 if fl != None:
62 if fl != None:
63 for x in fl.split():
63 for x in fl.split():
64 flags |= revlog.flagstr(x)
64 flags |= revlog.flagstr(x)
65 elif self.revlogv1:
65 elif self.revlogv1:
66 flags = revlog.REVLOG_DEFAULT_FLAGS
66 flags = revlog.REVLOG_DEFAULT_FLAGS
67
67
68 v = self.revlogversion | flags
68 v = self.revlogversion | flags
69 self.manifest = manifest.manifest(self.sopener, v)
69 self.manifest = manifest.manifest(self.sopener, v)
70 self.changelog = changelog.changelog(self.sopener, v)
70 self.changelog = changelog.changelog(self.sopener, v)
71
71
72 # the changelog might not have the inline index flag
72 # the changelog might not have the inline index flag
73 # on. If the format of the changelog is the same as found in
73 # on. If the format of the changelog is the same as found in
74 # .hgrc, apply any flags found in the .hgrc as well.
74 # .hgrc, apply any flags found in the .hgrc as well.
75 # Otherwise, just version from the changelog
75 # Otherwise, just version from the changelog
76 v = self.changelog.version
76 v = self.changelog.version
77 if v == self.revlogversion:
77 if v == self.revlogversion:
78 v |= flags
78 v |= flags
79 self.revlogversion = v
79 self.revlogversion = v
80
80
81 self.tagscache = None
81 self.tagscache = None
82 self.branchcache = None
82 self.branchcache = None
83 self.nodetagscache = None
83 self.nodetagscache = None
84 self.encodepats = None
84 self.encodepats = None
85 self.decodepats = None
85 self.decodepats = None
86 self.transhandle = None
86 self.transhandle = None
87
87
88 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
88 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
89
89
90 def url(self):
90 def url(self):
91 return 'file:' + self.root
91 return 'file:' + self.root
92
92
93 def hook(self, name, throw=False, **args):
93 def hook(self, name, throw=False, **args):
94 def callhook(hname, funcname):
94 def callhook(hname, funcname):
95 '''call python hook. hook is callable object, looked up as
95 '''call python hook. hook is callable object, looked up as
96 name in python module. if callable returns "true", hook
96 name in python module. if callable returns "true", hook
97 fails, else passes. if hook raises exception, treated as
97 fails, else passes. if hook raises exception, treated as
98 hook failure. exception propagates if throw is "true".
98 hook failure. exception propagates if throw is "true".
99
99
100 reason for "true" meaning "hook failed" is so that
100 reason for "true" meaning "hook failed" is so that
101 unmodified commands (e.g. mercurial.commands.update) can
101 unmodified commands (e.g. mercurial.commands.update) can
102 be run as hooks without wrappers to convert return values.'''
102 be run as hooks without wrappers to convert return values.'''
103
103
104 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
104 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
105 d = funcname.rfind('.')
105 d = funcname.rfind('.')
106 if d == -1:
106 if d == -1:
107 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
107 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
108 % (hname, funcname))
108 % (hname, funcname))
109 modname = funcname[:d]
109 modname = funcname[:d]
110 try:
110 try:
111 obj = __import__(modname)
111 obj = __import__(modname)
112 except ImportError:
112 except ImportError:
113 try:
113 try:
114 # extensions are loaded with hgext_ prefix
114 # extensions are loaded with hgext_ prefix
115 obj = __import__("hgext_%s" % modname)
115 obj = __import__("hgext_%s" % modname)
116 except ImportError:
116 except ImportError:
117 raise util.Abort(_('%s hook is invalid '
117 raise util.Abort(_('%s hook is invalid '
118 '(import of "%s" failed)') %
118 '(import of "%s" failed)') %
119 (hname, modname))
119 (hname, modname))
120 try:
120 try:
121 for p in funcname.split('.')[1:]:
121 for p in funcname.split('.')[1:]:
122 obj = getattr(obj, p)
122 obj = getattr(obj, p)
123 except AttributeError, err:
123 except AttributeError, err:
124 raise util.Abort(_('%s hook is invalid '
124 raise util.Abort(_('%s hook is invalid '
125 '("%s" is not defined)') %
125 '("%s" is not defined)') %
126 (hname, funcname))
126 (hname, funcname))
127 if not callable(obj):
127 if not callable(obj):
128 raise util.Abort(_('%s hook is invalid '
128 raise util.Abort(_('%s hook is invalid '
129 '("%s" is not callable)') %
129 '("%s" is not callable)') %
130 (hname, funcname))
130 (hname, funcname))
131 try:
131 try:
132 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
132 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
133 except (KeyboardInterrupt, util.SignalInterrupt):
133 except (KeyboardInterrupt, util.SignalInterrupt):
134 raise
134 raise
135 except Exception, exc:
135 except Exception, exc:
136 if isinstance(exc, util.Abort):
136 if isinstance(exc, util.Abort):
137 self.ui.warn(_('error: %s hook failed: %s\n') %
137 self.ui.warn(_('error: %s hook failed: %s\n') %
138 (hname, exc.args[0]))
138 (hname, exc.args[0]))
139 else:
139 else:
140 self.ui.warn(_('error: %s hook raised an exception: '
140 self.ui.warn(_('error: %s hook raised an exception: '
141 '%s\n') % (hname, exc))
141 '%s\n') % (hname, exc))
142 if throw:
142 if throw:
143 raise
143 raise
144 self.ui.print_exc()
144 self.ui.print_exc()
145 return True
145 return True
146 if r:
146 if r:
147 if throw:
147 if throw:
148 raise util.Abort(_('%s hook failed') % hname)
148 raise util.Abort(_('%s hook failed') % hname)
149 self.ui.warn(_('warning: %s hook failed\n') % hname)
149 self.ui.warn(_('warning: %s hook failed\n') % hname)
150 return r
150 return r
151
151
152 def runhook(name, cmd):
152 def runhook(name, cmd):
153 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
153 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
154 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
154 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
155 r = util.system(cmd, environ=env, cwd=self.root)
155 r = util.system(cmd, environ=env, cwd=self.root)
156 if r:
156 if r:
157 desc, r = util.explain_exit(r)
157 desc, r = util.explain_exit(r)
158 if throw:
158 if throw:
159 raise util.Abort(_('%s hook %s') % (name, desc))
159 raise util.Abort(_('%s hook %s') % (name, desc))
160 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
160 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
161 return r
161 return r
162
162
163 r = False
163 r = False
164 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
164 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
165 if hname.split(".", 1)[0] == name and cmd]
165 if hname.split(".", 1)[0] == name and cmd]
166 hooks.sort()
166 hooks.sort()
167 for hname, cmd in hooks:
167 for hname, cmd in hooks:
168 if cmd.startswith('python:'):
168 if cmd.startswith('python:'):
169 r = callhook(hname, cmd[7:].strip()) or r
169 r = callhook(hname, cmd[7:].strip()) or r
170 else:
170 else:
171 r = runhook(hname, cmd) or r
171 r = runhook(hname, cmd) or r
172 return r
172 return r
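The calling convention implemented by callhook() above can be sketched with a standalone hook function: Mercurial calls it with ui, repo, hooktype and the hook's keyword arguments, and a truthy return value marks the hook as failed. The empty-message check below is purely illustrative, not part of the source:

```python
def check_message(ui=None, repo=None, hooktype=None, **kwargs):
    """Illustrative python hook: fail (return a truthy value) when
    the supplied commit message is empty, pass (falsy) otherwise."""
    msg = kwargs.get('message', '')
    if not msg.strip():
        if ui:
            ui.warn('rejecting empty commit message\n')
        return True   # truthy return value -> hook failed
    return False      # falsy return value -> hook passed
```

Such a hook would be wired up in hgrc as, e.g., `pretxncommit.checkmsg = python:mymodule.check_message` (the module path is hypothetical).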
173
173
174 tag_disallowed = ':\r\n'
174 tag_disallowed = ':\r\n'
175
175
176 def tag(self, name, node, message, local, user, date):
176 def tag(self, name, node, message, local, user, date):
177 '''tag a revision with a symbolic name.
177 '''tag a revision with a symbolic name.
178
178
179 if local is True, the tag is stored in a per-repository file.
179 if local is True, the tag is stored in a per-repository file.
180 otherwise, it is stored in the .hgtags file, and a new
180 otherwise, it is stored in the .hgtags file, and a new
181 changeset is committed with the change.
181 changeset is committed with the change.
182
182
183 keyword arguments:
183 keyword arguments:
184
184
185 local: whether to store tag in non-version-controlled file
185 local: whether to store tag in non-version-controlled file
186 (default False)
186 (default False)
187
187
188 message: commit message to use if committing
188 message: commit message to use if committing
189
189
190 user: name of user to use if committing
190 user: name of user to use if committing
191
191
192 date: date tuple to use if committing'''
192 date: date tuple to use if committing'''
193
193
194 for c in self.tag_disallowed:
194 for c in self.tag_disallowed:
195 if c in name:
195 if c in name:
196 raise util.Abort(_('%r cannot be used in a tag name') % c)
196 raise util.Abort(_('%r cannot be used in a tag name') % c)
197
197
198 self.hook('pretag', throw=True, node=hex(node), tag=name, local=local)
198 self.hook('pretag', throw=True, node=hex(node), tag=name, local=local)
199
199
200 if local:
200 if local:
201 self.opener('localtags', 'a').write('%s %s\n' % (hex(node), name))
201 self.opener('localtags', 'a').write('%s %s\n' % (hex(node), name))
202 self.hook('tag', node=hex(node), tag=name, local=local)
202 self.hook('tag', node=hex(node), tag=name, local=local)
203 return
203 return
204
204
205 for x in self.status()[:5]:
205 for x in self.status()[:5]:
206 if '.hgtags' in x:
206 if '.hgtags' in x:
207 raise util.Abort(_('working copy of .hgtags is changed '
207 raise util.Abort(_('working copy of .hgtags is changed '
208 '(please commit .hgtags manually)'))
208 '(please commit .hgtags manually)'))
209
209
210 self.wfile('.hgtags', 'ab').write('%s %s\n' % (hex(node), name))
210 self.wfile('.hgtags', 'ab').write('%s %s\n' % (hex(node), name))
211 if self.dirstate.state('.hgtags') == '?':
211 if self.dirstate.state('.hgtags') == '?':
212 self.add(['.hgtags'])
212 self.add(['.hgtags'])
213
213
214 self.commit(['.hgtags'], message, user, date)
214 self.commit(['.hgtags'], message, user, date)
215 self.hook('tag', node=hex(node), tag=name, local=local)
215 self.hook('tag', node=hex(node), tag=name, local=local)
216
216
217 def tags(self):
217 def tags(self):
218 '''return a mapping of tag to node'''
218 '''return a mapping of tag to node'''
219 if not self.tagscache:
219 if not self.tagscache:
220 self.tagscache = {}
220 self.tagscache = {}
221
221
222 def parsetag(line, context):
222 def parsetag(line, context):
223 if not line:
223 if not line:
224 return
224 return
225 s = line.split(" ", 1)
225 s = line.split(" ", 1)
226 if len(s) != 2:
226 if len(s) != 2:
227 self.ui.warn(_("%s: cannot parse entry\n") % context)
227 self.ui.warn(_("%s: cannot parse entry\n") % context)
228 return
228 return
229 node, key = s
229 node, key = s
230 key = key.strip()
230 key = key.strip()
231 try:
231 try:
232 bin_n = bin(node)
232 bin_n = bin(node)
233 except TypeError:
233 except TypeError:
234 self.ui.warn(_("%s: node '%s' is not well formed\n") %
234 self.ui.warn(_("%s: node '%s' is not well formed\n") %
235 (context, node))
235 (context, node))
236 return
236 return
237 if bin_n not in self.changelog.nodemap:
237 if bin_n not in self.changelog.nodemap:
238 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
238 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
239 (context, key))
239 (context, key))
240 return
240 return
241 self.tagscache[key] = bin_n
241 self.tagscache[key] = bin_n
242
242
243 # read the tags file from each head, ending with the tip,
243 # read the tags file from each head, ending with the tip,
244 # and add each tag found to the map, with "newer" ones
244 # and add each tag found to the map, with "newer" ones
245 # taking precedence
245 # taking precedence
246 f = None
246 f = None
247 for rev, node, fnode in self._hgtagsnodes():
247 for rev, node, fnode in self._hgtagsnodes():
248 f = (f and f.filectx(fnode) or
248 f = (f and f.filectx(fnode) or
249 self.filectx('.hgtags', fileid=fnode))
249 self.filectx('.hgtags', fileid=fnode))
250 count = 0
250 count = 0
251 for l in f.data().splitlines():
251 for l in f.data().splitlines():
252 count += 1
252 count += 1
253 parsetag(l, _("%s, line %d") % (str(f), count))
253 parsetag(l, _("%s, line %d") % (str(f), count))
254
254
255 try:
255 try:
256 f = self.opener("localtags")
256 f = self.opener("localtags")
257 count = 0
257 count = 0
258 for l in f:
258 for l in f:
259 count += 1
259 count += 1
260 parsetag(l, _("localtags, line %d") % count)
260 parsetag(l, _("localtags, line %d") % count)
261 except IOError:
261 except IOError:
262 pass
262 pass
263
263
264 self.tagscache['tip'] = self.changelog.tip()
264 self.tagscache['tip'] = self.changelog.tip()
265
265
266 return self.tagscache
266 return self.tagscache
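The .hgtags and localtags files read above share one line format: a 40-character hex node, a space, then the tag name. A minimal standalone parser mirroring the checks in parsetag() (a sketch, without the changelog-membership check) might look like:

```python
def parse_tag_line(line):
    """Parse one tags-file line of the form '<40-hex-node> <tag>'.
    Returns (node_hex, tag) or None for blank or malformed lines."""
    line = line.rstrip('\n')
    if not line:
        return None
    parts = line.split(' ', 1)
    if len(parts) != 2:
        return None
    node, tag = parts
    # The node must be exactly 40 lowercase hex digits.
    if len(node) != 40 or any(c not in '0123456789abcdef' for c in node):
        return None
    return node, tag.strip()
```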
267
267
268 def _hgtagsnodes(self):
268 def _hgtagsnodes(self):
269 heads = self.heads()
269 heads = self.heads()
270 heads.reverse()
270 heads.reverse()
271 last = {}
271 last = {}
272 ret = []
272 ret = []
273 for node in heads:
273 for node in heads:
274 c = self.changectx(node)
274 c = self.changectx(node)
275 rev = c.rev()
275 rev = c.rev()
276 try:
276 try:
277 fnode = c.filenode('.hgtags')
277 fnode = c.filenode('.hgtags')
278 except repo.LookupError:
278 except repo.LookupError:
279 continue
279 continue
280 ret.append((rev, node, fnode))
280 ret.append((rev, node, fnode))
281 if fnode in last:
281 if fnode in last:
282 ret[last[fnode]] = None
282 ret[last[fnode]] = None
283 last[fnode] = len(ret) - 1
283 last[fnode] = len(ret) - 1
284 return [item for item in ret if item]
284 return [item for item in ret if item]
285
285
286 def tagslist(self):
286 def tagslist(self):
287 '''return a list of tags ordered by revision'''
287 '''return a list of tags ordered by revision'''
288 l = []
288 l = []
289 for t, n in self.tags().items():
289 for t, n in self.tags().items():
290 try:
290 try:
291 r = self.changelog.rev(n)
291 r = self.changelog.rev(n)
292 except:
292 except:
293 r = -2 # sort to the beginning of the list if unknown
293 r = -2 # sort to the beginning of the list if unknown
294 l.append((r, t, n))
294 l.append((r, t, n))
295 l.sort()
295 l.sort()
296 return [(t, n) for r, t, n in l]
296 return [(t, n) for r, t, n in l]
297
297
298 def nodetags(self, node):
298 def nodetags(self, node):
299 '''return the tags associated with a node'''
299 '''return the tags associated with a node'''
300 if not self.nodetagscache:
300 if not self.nodetagscache:
301 self.nodetagscache = {}
301 self.nodetagscache = {}
302 for t, n in self.tags().items():
302 for t, n in self.tags().items():
303 self.nodetagscache.setdefault(n, []).append(t)
303 self.nodetagscache.setdefault(n, []).append(t)
304 return self.nodetagscache.get(node, [])
304 return self.nodetagscache.get(node, [])
305
305
306 def branchtags(self):
306 def branchtags(self):
307 if self.branchcache != None:
307 if self.branchcache != None:
308 return self.branchcache
308 return self.branchcache
309
309
310 self.branchcache = {} # avoid recursion in changectx
310 self.branchcache = {} # avoid recursion in changectx
311
311
312 partial, last, lrev = self._readbranchcache()
312 partial, last, lrev = self._readbranchcache()
313
313
314 tiprev = self.changelog.count() - 1
314 tiprev = self.changelog.count() - 1
315 if lrev != tiprev:
315 if lrev != tiprev:
316 self._updatebranchcache(partial, lrev+1, tiprev+1)
316 self._updatebranchcache(partial, lrev+1, tiprev+1)
317 self._writebranchcache(partial, self.changelog.tip(), tiprev)
317 self._writebranchcache(partial, self.changelog.tip(), tiprev)
318
318
319 self.branchcache = partial
319 self.branchcache = partial
320 return self.branchcache
320 return self.branchcache
321
321
322 def _readbranchcache(self):
322 def _readbranchcache(self):
323 partial = {}
323 partial = {}
324 try:
324 try:
325 f = self.opener("branches.cache")
325 f = self.opener("branches.cache")
326 lines = f.read().split('\n')
326 lines = f.read().split('\n')
327 f.close()
327 f.close()
328 last, lrev = lines.pop(0).rstrip().split(" ", 1)
328 last, lrev = lines.pop(0).rstrip().split(" ", 1)
329 last, lrev = bin(last), int(lrev)
329 last, lrev = bin(last), int(lrev)
330 if (lrev < self.changelog.count() and
330 if (lrev < self.changelog.count() and
331 self.changelog.node(lrev) == last): # sanity check
331 self.changelog.node(lrev) == last): # sanity check
332 for l in lines:
332 for l in lines:
333 if not l: continue
333 if not l: continue
334 node, label = l.rstrip().split(" ", 1)
334 node, label = l.rstrip().split(" ", 1)
335 partial[label] = bin(node)
335 partial[label] = bin(node)
336 else: # invalidate the cache
336 else: # invalidate the cache
337 last, lrev = nullid, nullrev
337 last, lrev = nullid, nullrev
338 except IOError:
338 except IOError:
339 last, lrev = nullid, nullrev
339 last, lrev = nullid, nullrev
340 return partial, last, lrev
340 return partial, last, lrev
341
341
342 def _writebranchcache(self, branches, tip, tiprev):
342 def _writebranchcache(self, branches, tip, tiprev):
343 try:
343 try:
344 f = self.opener("branches.cache", "w")
344 f = self.opener("branches.cache", "w")
345 f.write("%s %s\n" % (hex(tip), tiprev))
345 f.write("%s %s\n" % (hex(tip), tiprev))
346 for label, node in branches.iteritems():
346 for label, node in branches.iteritems():
347 f.write("%s %s\n" % (hex(node), label))
347 f.write("%s %s\n" % (hex(node), label))
348 except IOError:
348 except IOError:
349 pass
349 pass
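The branches.cache file written above has a simple text layout: a header line "<tip-hex> <tip-rev>" followed by one "<node-hex> <branch-label>" line per branch. A self-contained round-trip of that layout (using hex strings in place of the binary nodes the real code converts with bin()/hex()) might look like:

```python
def write_branch_cache(tip_hex, tiprev, branches):
    """Serialize a branch cache: header '<tip> <rev>', then one
    '<node> <label>' line per branch (illustrative, hex nodes)."""
    lines = ['%s %s' % (tip_hex, tiprev)]
    for label, node_hex in sorted(branches.items()):
        lines.append('%s %s' % (node_hex, label))
    return '\n'.join(lines) + '\n'

def read_branch_cache(data):
    """Inverse of write_branch_cache: returns (branches, tip, rev)."""
    lines = data.split('\n')
    tip_hex, tiprev = lines.pop(0).rstrip().split(' ', 1)
    partial = {}
    for l in lines:
        if not l:
            continue
        node, label = l.rstrip().split(' ', 1)
        partial[label] = node
    return partial, tip_hex, int(tiprev)
```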
350
350
351 def _updatebranchcache(self, partial, start, end):
351 def _updatebranchcache(self, partial, start, end):
352 for r in xrange(start, end):
352 for r in xrange(start, end):
353 c = self.changectx(r)
353 c = self.changectx(r)
354 b = c.branch()
354 b = c.branch()
355 if b:
355 if b:
356 partial[b] = c.node()
356 partial[b] = c.node()
357
357
358 def lookup(self, key):
358 def lookup(self, key):
359 if key == '.':
359 if key == '.':
360 key = self.dirstate.parents()[0]
360 key = self.dirstate.parents()[0]
361 if key == nullid:
361 if key == nullid:
362 raise repo.RepoError(_("no revision checked out"))
362 raise repo.RepoError(_("no revision checked out"))
363 n = self.changelog._match(key)
363 n = self.changelog._match(key)
364 if n:
364 if n:
365 return n
365 return n
366 if key in self.tags():
366 if key in self.tags():
367 return self.tags()[key]
367 return self.tags()[key]
368 if key in self.branchtags():
368 if key in self.branchtags():
369 return self.branchtags()[key]
369 return self.branchtags()[key]
370 n = self.changelog._partialmatch(key)
370 n = self.changelog._partialmatch(key)
371 if n:
371 if n:
372 return n
372 return n
373 raise repo.RepoError(_("unknown revision '%s'") % key)
373 raise repo.RepoError(_("unknown revision '%s'") % key)
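The resolution order implemented by lookup() above (exact changelog match, then tag names, then branch tags, then an unambiguous prefix) can be sketched with plain callables and dicts standing in for the changelog and caches:

```python
def lookup_key(key, exact, tags, branches, partialmatch):
    """Resolve key via: exact match, tag name, branch name,
    unambiguous prefix; raise KeyError otherwise (sketch)."""
    n = exact(key)
    if n:
        return n
    if key in tags:
        return tags[key]
    if key in branches:
        return branches[key]
    n = partialmatch(key)
    if n:
        return n
    raise KeyError("unknown revision %r" % key)
```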
374
374
375 def dev(self):
375 def dev(self):
376 return os.lstat(self.path).st_dev
376 return os.lstat(self.path).st_dev
377
377
378 def local(self):
378 def local(self):
379 return True
379 return True
380
380
381 def join(self, f):
381 def join(self, f):
382 return os.path.join(self.path, f)
382 return os.path.join(self.path, f)
383
383
384 def sjoin(self, f):
384 def sjoin(self, f):
385 return os.path.join(self.path, f)
385 return os.path.join(self.path, f)
386
386
387 def wjoin(self, f):
387 def wjoin(self, f):
388 return os.path.join(self.root, f)
388 return os.path.join(self.root, f)
389
389
390 def file(self, f):
390 def file(self, f):
391 if f[0] == '/':
391 if f[0] == '/':
392 f = f[1:]
392 f = f[1:]
393 return filelog.filelog(self.sopener, f, self.revlogversion)
393 return filelog.filelog(self.sopener, f, self.revlogversion)
394
394
395 def changectx(self, changeid=None):
395 def changectx(self, changeid=None):
396 return context.changectx(self, changeid)
396 return context.changectx(self, changeid)
397
397
398 def workingctx(self):
398 def workingctx(self):
399 return context.workingctx(self)
399 return context.workingctx(self)
400
400
401 def parents(self, changeid=None):
401 def parents(self, changeid=None):
402 '''
402 '''
403 get list of changectxs for parents of changeid or working directory
403 get list of changectxs for parents of changeid or working directory
404 '''
404 '''
405 if changeid is None:
405 if changeid is None:
406 pl = self.dirstate.parents()
406 pl = self.dirstate.parents()
407 else:
407 else:
408 n = self.changelog.lookup(changeid)
408 n = self.changelog.lookup(changeid)
409 pl = self.changelog.parents(n)
409 pl = self.changelog.parents(n)
410 if pl[1] == nullid:
410 if pl[1] == nullid:
411 return [self.changectx(pl[0])]
411 return [self.changectx(pl[0])]
412 return [self.changectx(pl[0]), self.changectx(pl[1])]
412 return [self.changectx(pl[0]), self.changectx(pl[1])]
413
413
    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        tr = self.transhandle
        if tr != None and tr.running():
            return tr.nest()

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(self.path))
        self.transhandle = tr
        return tr

    def recover(self):
        l = self.lock()
        if os.path.exists(self.sjoin("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("journal"))
            self.reload()
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def rollback(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.sjoin("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.reload()
            self.wreload()
        else:
            self.ui.warn(_("no rollback information available\n"))

    def wreload(self):
        self.dirstate.read()

    def reload(self):
        self.changelog.load()
        self.manifest.load()
        self.tagscache = None
        self.nodetagscache = None

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
                desc=None):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except lock.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock(self.sjoin("lock"), wait, acquirefn=self.reload,
                            desc=_('repository %s') % self.origroot)

    def wlock(self, wait=1):
        return self.do_lock(self.join("wlock"), wait, self.dirstate.write,
                            self.wreload,
                            desc=_('working directory of %s') % self.origroot)

    def filecommit(self, fn, manifest1, manifest2, linkrev, transaction, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        t = self.wread(fn)
        fl = self.file(fn)
        fp1 = manifest1.get(fn, nullid)
        fp2 = manifest2.get(fn, nullid)

        meta = {}
        cp = self.dirstate.copied(fn)
        if cp:
            meta["copy"] = cp
            if not manifest2: # not a branch merge
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
                fp2 = nullid
            elif fp2 != nullid: # copied on remote side
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
            else: # copied on local side, reversed
                meta["copyrev"] = hex(manifest2.get(cp))
                fp2 = nullid
            self.ui.debug(_(" %s: copy %s:%s\n") %
                          (fn, cp, meta["copyrev"]))
            fp1 = nullid
        elif fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = fl.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and not fl.cmp(fp1, t):
            return fp1

        changelist.append(fn)
        return fl.add(t, meta, transaction, linkrev, fp1, fp2)

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        if p1 is None:
            p1, p2 = self.dirstate.parents()
        return self.commit(files=files, text=text, user=user, date=date,
                           p1=p1, p2=p2, wlock=wlock)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, lock=None, wlock=None,
               force_editor=False, p1=None, p2=None, extra={}):

        commit = []
        remove = []
        changed = []
        use_dirstate = (p1 is None) # not rawcommit
        extra = extra.copy()

        if use_dirstate:
            if files:
                for f in files:
                    s = self.dirstate.state(f)
                    if s in 'nmai':
                        commit.append(f)
                    elif s == 'r':
                        remove.append(f)
                    else:
                        self.ui.warn(_("%s not tracked!\n") % f)
            else:
                changes = self.status(match=match)[:5]
                modified, added, removed, deleted, unknown = changes
                commit = modified + added
                remove = removed
        else:
            commit = files

        if use_dirstate:
            p1, p2 = self.dirstate.parents()
            update_dirstate = True
        else:
            p1, p2 = p1, p2 or nullid
            update_dirstate = (self.dirstate.parents()[0] == p1)

        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0]).copy()
        m2 = self.manifest.read(c2[0])

        if use_dirstate:
            branchname = self.workingctx().branch()
        else:
            branchname = ""

        if use_dirstate:
            oldname = c1[5].get("branch", "")
            if not commit and not remove and not force and p2 == nullid and \
                   branchname == oldname:
                self.ui.status(_("nothing changed\n"))
                return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        if not lock:
            lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                new[f] = self.filecommit(f, m1, m2, linkrev, tr, changed)
                m1.set(f, util.is_exec(self.wjoin(f), m1.execf(f)))
            except IOError:
                if use_dirstate:
                    self.ui.warn(_("trouble committing %s!\n") % f)
                    raise
                else:
                    remove.append(f)

        # update manifest
        m1.update(new)
        remove.sort()

        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, tr, linkrev, c1[0], c2[0], (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        user = user or self.ui.username()
        if not text or force_editor:
            edittext = []
            if text:
                edittext.append(text)
            edittext.append("")
            edittext.append("HG: user: %s" % user)
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            text = self.ui.edit("\n".join(edittext), user)
            os.chdir(olddir)

        lines = [line.rstrip() for line in text.rstrip().splitlines()]
        while lines and not lines[0]:
            del lines[0]
        if not lines:
            return None
        text = '\n'.join(lines)
        if branchname:
            extra["branch"] = branchname
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2,
                               user, date, extra)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        if use_dirstate or update_dirstate:
            self.dirstate.setparents(n)
            if use_dirstate:
                self.dirstate.update(new, "n")
                self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

718 def walk(self, node=None, files=[], match=util.always, badmatch=None):
719 def walk(self, node=None, files=[], match=util.always, badmatch=None):
719 '''
720 '''
720 walk recursively through the directory tree or a given
721 walk recursively through the directory tree or a given
721 changeset, finding all files matched by the match
722 changeset, finding all files matched by the match
722 function
723 function
723
724
724 results are yielded in a tuple (src, filename), where src
725 results are yielded in a tuple (src, filename), where src
725 is one of:
726 is one of:
726 'f' the file was found in the directory tree
727 'f' the file was found in the directory tree
727 'm' the file was only in the dirstate and not in the tree
728 'm' the file was only in the dirstate and not in the tree
728 'b' file was not found and matched badmatch
729 'b' file was not found and matched badmatch
729 '''
730 '''
730
731
731 if node:
732 if node:
732 fdict = dict.fromkeys(files)
733 fdict = dict.fromkeys(files)
733 for fn in self.manifest.read(self.changelog.read(node)[0]):
734 for fn in self.manifest.read(self.changelog.read(node)[0]):
734 for ffn in fdict:
735 for ffn in fdict:
735 # match if the file is the exact name or a directory
736 # match if the file is the exact name or a directory
736 if ffn == fn or fn.startswith("%s/" % ffn):
737 if ffn == fn or fn.startswith("%s/" % ffn):
737 del fdict[ffn]
738 del fdict[ffn]
738 break
739 break
739 if match(fn):
740 if match(fn):
740 yield 'm', fn
741 yield 'm', fn
741 for fn in fdict:
742 for fn in fdict:
742 if badmatch and badmatch(fn):
743 if badmatch and badmatch(fn):
743 if match(fn):
744 if match(fn):
744 yield 'b', fn
745 yield 'b', fn
745 else:
746 else:
746 self.ui.warn(_('%s: No such file in rev %s\n') % (
747 self.ui.warn(_('%s: No such file in rev %s\n') % (
747 util.pathto(self.getcwd(), fn), short(node)))
748 util.pathto(self.getcwd(), fn), short(node)))
748 else:
749 else:
749 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
750 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
750 yield src, fn
751 yield src, fn
751
752
752 def status(self, node1=None, node2=None, files=[], match=util.always,
753 def status(self, node1=None, node2=None, files=[], match=util.always,
753 wlock=None, list_ignored=False, list_clean=False):
754 wlock=None, list_ignored=False, list_clean=False):
754 """return status of files between two nodes or node and working directory
755 """return status of files between two nodes or node and working directory
755
756
756 If node1 is None, use the first dirstate parent instead.
757 If node1 is None, use the first dirstate parent instead.
757 If node2 is None, compare node1 with working directory.
758 If node2 is None, compare node1 with working directory.
758 """
759 """
759
760
760 def fcmp(fn, mf):
761 def fcmp(fn, mf):
761 t1 = self.wread(fn)
762 t1 = self.wread(fn)
762 return self.file(fn).cmp(mf.get(fn, nullid), t1)
763 return self.file(fn).cmp(mf.get(fn, nullid), t1)
763
764
764 def mfmatches(node):
765 def mfmatches(node):
765 change = self.changelog.read(node)
766 change = self.changelog.read(node)
766 mf = self.manifest.read(change[0]).copy()
767 mf = self.manifest.read(change[0]).copy()
767 for fn in mf.keys():
768 for fn in mf.keys():
768 if not match(fn):
769 if not match(fn):
769 del mf[fn]
770 del mf[fn]
770 return mf
771 return mf
771
772
772 modified, added, removed, deleted, unknown = [], [], [], [], []
773 modified, added, removed, deleted, unknown = [], [], [], [], []
773 ignored, clean = [], []
774 ignored, clean = [], []
774
775
775 compareworking = False
776 compareworking = False
776 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
777 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
777 compareworking = True
778 compareworking = True
778
779
779 if not compareworking:
780 if not compareworking:
780 # read the manifest from node1 before the manifest from node2,
781 # read the manifest from node1 before the manifest from node2,
781 # so that we'll hit the manifest cache if we're going through
782 # so that we'll hit the manifest cache if we're going through
782 # all the revisions in parent->child order.
783 # all the revisions in parent->child order.
783 mf1 = mfmatches(node1)
784 mf1 = mfmatches(node1)
784
785
785 # are we comparing the working directory?
786 # are we comparing the working directory?
786 if not node2:
787 if not node2:
787 if not wlock:
788 if not wlock:
788 try:
789 try:
789 wlock = self.wlock(wait=0)
790 wlock = self.wlock(wait=0)
790 except lock.LockException:
791 except lock.LockException:
791 wlock = None
792 wlock = None
792 (lookup, modified, added, removed, deleted, unknown,
793 (lookup, modified, added, removed, deleted, unknown,
793 ignored, clean) = self.dirstate.status(files, match,
794 ignored, clean) = self.dirstate.status(files, match,
794 list_ignored, list_clean)
795 list_ignored, list_clean)
795
796
796 # are we comparing working dir against its parent?
797 # are we comparing working dir against its parent?
797 if compareworking:
798 if compareworking:
798 if lookup:
799 if lookup:
799 # do a full compare of any files that might have changed
800 # do a full compare of any files that might have changed
800 mf2 = mfmatches(self.dirstate.parents()[0])
801 mf2 = mfmatches(self.dirstate.parents()[0])
801 for f in lookup:
802 for f in lookup:
802 if fcmp(f, mf2):
803 if fcmp(f, mf2):
803 modified.append(f)
804 modified.append(f)
804 else:
805 else:
805 clean.append(f)
806 clean.append(f)
806 if wlock is not None:
807 if wlock is not None:
807 self.dirstate.update([f], "n")
808 self.dirstate.update([f], "n")
808 else:
809 else:
809 # we are comparing working dir against non-parent
810 # we are comparing working dir against non-parent
810 # generate a pseudo-manifest for the working dir
811 # generate a pseudo-manifest for the working dir
811 # XXX: create it in dirstate.py ?
812 # XXX: create it in dirstate.py ?
812 mf2 = mfmatches(self.dirstate.parents()[0])
813 mf2 = mfmatches(self.dirstate.parents()[0])
813 for f in lookup + modified + added:
814 for f in lookup + modified + added:
814 mf2[f] = ""
815 mf2[f] = ""
815 mf2.set(f, execf=util.is_exec(self.wjoin(f), mf2.execf(f)))
816 mf2.set(f, execf=util.is_exec(self.wjoin(f), mf2.execf(f)))
816 for f in removed:
817 for f in removed:
817 if f in mf2:
818 if f in mf2:
818 del mf2[f]
819 del mf2[f]
819 else:
820 else:
820 # we are comparing two revisions
821 # we are comparing two revisions
821 mf2 = mfmatches(node2)
822 mf2 = mfmatches(node2)
822
823
823 if not compareworking:
824 if not compareworking:
824 # flush lists from dirstate before comparing manifests
825 # flush lists from dirstate before comparing manifests
825 modified, added, clean = [], [], []
826 modified, added, clean = [], [], []
826
827
827 # make sure to sort the files so we talk to the disk in a
828 # make sure to sort the files so we talk to the disk in a
828 # reasonable order
829 # reasonable order
829 mf2keys = mf2.keys()
830 mf2keys = mf2.keys()
830 mf2keys.sort()
831 mf2keys.sort()
831 for fn in mf2keys:
832 for fn in mf2keys:
832 if mf1.has_key(fn):
833 if mf1.has_key(fn):
833 if mf1.flags(fn) != mf2.flags(fn) or \
834 if mf1.flags(fn) != mf2.flags(fn) or \
834 (mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1))):
835 (mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1))):
835 modified.append(fn)
836 modified.append(fn)
836 elif list_clean:
837 elif list_clean:
837 clean.append(fn)
838 clean.append(fn)
838 del mf1[fn]
839 del mf1[fn]
839 else:
840 else:
840 added.append(fn)
841 added.append(fn)
841
842
842 removed = mf1.keys()
843 removed = mf1.keys()
843
844
844 # sort and return results:
845 # sort and return results:
845 for l in modified, added, removed, deleted, unknown, ignored, clean:
846 for l in modified, added, removed, deleted, unknown, ignored, clean:
846 l.sort()
847 l.sort()
847 return (modified, added, removed, deleted, unknown, ignored, clean)
848 return (modified, added, removed, deleted, unknown, ignored, clean)
848
849
849 def add(self, list, wlock=None):
850 def add(self, list, wlock=None):
850 if not wlock:
851 if not wlock:
851 wlock = self.wlock()
852 wlock = self.wlock()
852 for f in list:
853 for f in list:
853 p = self.wjoin(f)
854 p = self.wjoin(f)
854 if not os.path.exists(p):
855 if not os.path.exists(p):
855 self.ui.warn(_("%s does not exist!\n") % f)
856 self.ui.warn(_("%s does not exist!\n") % f)
856 elif not os.path.isfile(p):
857 elif not os.path.isfile(p):
857 self.ui.warn(_("%s not added: only files supported currently\n")
858 self.ui.warn(_("%s not added: only files supported currently\n")
858 % f)
859 % f)
859 elif self.dirstate.state(f) in 'an':
860 elif self.dirstate.state(f) in 'an':
860 self.ui.warn(_("%s already tracked!\n") % f)
861 self.ui.warn(_("%s already tracked!\n") % f)
861 else:
862 else:
862 self.dirstate.update([f], "a")
863 self.dirstate.update([f], "a")
863
864
864 def forget(self, list, wlock=None):
865 def forget(self, list, wlock=None):
865 if not wlock:
866 if not wlock:
866 wlock = self.wlock()
867 wlock = self.wlock()
867 for f in list:
868 for f in list:
868 if self.dirstate.state(f) not in 'ai':
869 if self.dirstate.state(f) not in 'ai':
869 self.ui.warn(_("%s not added!\n") % f)
870 self.ui.warn(_("%s not added!\n") % f)
870 else:
871 else:
871 self.dirstate.forget([f])
872 self.dirstate.forget([f])
872
873
873 def remove(self, list, unlink=False, wlock=None):
874 def remove(self, list, unlink=False, wlock=None):
874 if unlink:
875 if unlink:
875 for f in list:
876 for f in list:
876 try:
877 try:
877 util.unlink(self.wjoin(f))
878 util.unlink(self.wjoin(f))
878 except OSError, inst:
879 except OSError, inst:
879 if inst.errno != errno.ENOENT:
880 if inst.errno != errno.ENOENT:
880 raise
881 raise
881 if not wlock:
882 if not wlock:
882 wlock = self.wlock()
883 wlock = self.wlock()
883 for f in list:
884 for f in list:
884 p = self.wjoin(f)
885 p = self.wjoin(f)
885 if os.path.exists(p):
886 if os.path.exists(p):
886 self.ui.warn(_("%s still exists!\n") % f)
887 self.ui.warn(_("%s still exists!\n") % f)
887 elif self.dirstate.state(f) == 'a':
888 elif self.dirstate.state(f) == 'a':
888 self.dirstate.forget([f])
889 self.dirstate.forget([f])
889 elif f not in self.dirstate:
890 elif f not in self.dirstate:
890 self.ui.warn(_("%s not tracked!\n") % f)
891 self.ui.warn(_("%s not tracked!\n") % f)
891 else:
892 else:
892 self.dirstate.update([f], "r")
893 self.dirstate.update([f], "r")
893
894
894 def undelete(self, list, wlock=None):
895 def undelete(self, list, wlock=None):
895 p = self.dirstate.parents()[0]
896 p = self.dirstate.parents()[0]
896 mn = self.changelog.read(p)[0]
897 mn = self.changelog.read(p)[0]
897 m = self.manifest.read(mn)
898 m = self.manifest.read(mn)
898 if not wlock:
899 if not wlock:
899 wlock = self.wlock()
900 wlock = self.wlock()
900 for f in list:
901 for f in list:
901 if self.dirstate.state(f) not in "r":
902 if self.dirstate.state(f) not in "r":
902 self.ui.warn("%s not removed!\n" % f)
903 self.ui.warn("%s not removed!\n" % f)
903 else:
904 else:
904 t = self.file(f).read(m[f])
905 t = self.file(f).read(m[f])
905 self.wwrite(f, t)
906 self.wwrite(f, t)
906 util.set_exec(self.wjoin(f), m.execf(f))
907 util.set_exec(self.wjoin(f), m.execf(f))
907 self.dirstate.update([f], "n")
908 self.dirstate.update([f], "n")
908
909
909 def copy(self, source, dest, wlock=None):
910 def copy(self, source, dest, wlock=None):
910 p = self.wjoin(dest)
911 p = self.wjoin(dest)
911 if not os.path.exists(p):
912 if not os.path.exists(p):
912 self.ui.warn(_("%s does not exist!\n") % dest)
913 self.ui.warn(_("%s does not exist!\n") % dest)
913 elif not os.path.isfile(p):
914 elif not os.path.isfile(p):
914 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
915 self.ui.warn(_("copy failed: %s is not a file\n") % dest)
915 else:
916 else:
916 if not wlock:
917 if not wlock:
917 wlock = self.wlock()
918 wlock = self.wlock()
918 if self.dirstate.state(dest) == '?':
919 if self.dirstate.state(dest) == '?':
919 self.dirstate.update([dest], "a")
920 self.dirstate.update([dest], "a")
920 self.dirstate.copy(source, dest)
921 self.dirstate.copy(source, dest)
921
922
922 def heads(self, start=None):
923 def heads(self, start=None):
923 heads = self.changelog.heads(start)
924 heads = self.changelog.heads(start)
924 # sort the output in rev descending order
925 # sort the output in rev descending order
925 heads = [(-self.changelog.rev(h), h) for h in heads]
926 heads = [(-self.changelog.rev(h), h) for h in heads]
926 heads.sort()
927 heads.sort()
927 return [n for (r, n) in heads]
928 return [n for (r, n) in heads]
928
929
    # branchlookup returns a dict giving a list of branches for
    # each head. A branch is defined as the tag of a node or
    # the branch of the node's parents. If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13. Because the node
    # for 2.6.12 can be reached from the node for 2.6.13, it is
    # eliminated from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # Callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (i.e. checkout of a
    # specific branch).
    #
    # Passing in a specific branch will limit the depth of the search
    # through the parents. It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [h for h in heads]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head. The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited. This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

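The sampling loop in between() records nodes at exponentially growing distances from the top of each range, which is what lets the narrowing loop in findincoming bisect a branch cheaply. A minimal standalone sketch, using integers n -> n-1 as a stand-in for the single-parent chain:

```python
def sample_chain(top, bottom):
    # Walk from top toward bottom along a single-parent chain, keeping
    # the nodes that lie 1, 2, 4, 8, ... steps below the top.
    n, l, i, f = top, [], 0, 1
    while n != bottom:
        if i == f:
            l.append(n)
            f = f * 2
        n = n - 1  # stand-in for changelog.parents(n)[0]
        i += 1
    return l

print(sample_chain(20, 0))  # nodes 1, 2, 4, 8 and 16 steps below the top
```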
    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore, base will be updated to include the nodes that exist
        in both self and remote but have no children that exist in both.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote,
        see outgoing)
        """
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        req = dict.fromkeys(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                        for p in n[2:4]:
                            if p in m:
                                base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in xrange(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.debug(_("found new changesets starting at ") +
                      " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base is None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads[p1] = True
                if p2 in heads:
                    updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset

    def pull(self, remote, heads=None, force=False, lock=None):
        mylock = False
        if not lock:
            lock = self.lock()
            mylock = True

        try:
            fetch = self.findincoming(remote, force=force)
            if fetch == [nullid]:
                self.ui.status(_("requesting all changes\n"))

            if not fetch:
                self.ui.status(_("no changes found\n"))
                return 0

            if heads is None:
                cg = remote.changegroup(fetch, 'pull')
            else:
                if 'changegroupsubset' not in remote.capabilities:
                    raise util.Abort(_("Partial pull cannot be done because "
                                       "other repository doesn't support "
                                       "changegroupsubset."))
                cg = remote.changegroupsubset(fetch, heads, 'pull')
            return self.addchangegroup(cg, 'pull', remote.url())
        finally:
            if mylock:
                lock.release()

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # check if we're creating new remote heads
            # to be a remote head after push, node must be either
            # - unknown locally
            # - a local outgoing head descended from update
            # - a remote head that's known locally and not
            #   ancestral to an outgoing head

            warn = 0

            if remote_heads == [nullid]:
                warn = 0
            elif not revs and len(heads) > len(remote_heads):
                warn = 1
            else:
                newheads = list(heads)
                for r in remote_heads:
                    if r in self.changelog.nodemap:
                        desc = self.changelog.heads(r)
                        l = [h for h in heads if h in desc]
                        if not l:
                            newheads.append(r)
                    else:
                        newheads.append(r)
                if len(newheads) > len(remote_heads):
                    warn = 1

            if warn:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return None, 1
            elif inc:
                self.ui.warn(_("note: unsynced remote changes!\n"))

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads

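The head-counting check in prepush() can be isolated: a push creates new remote heads when, after folding every remote head that one of our outgoing heads descends from into our head set, the combined head count still exceeds the remote's. A hedged sketch with a hypothetical `descendants` mapping (not Mercurial's API):

```python
def push_creates_new_heads(local_heads, remote_heads, descendants):
    # descendants[r] is the (hypothetical) set of local heads descending
    # from remote head r. A remote head is "covered" when one of our heads
    # descends from it; uncovered remote heads survive alongside all local
    # heads, so the total head count grows.
    newheads = list(local_heads)
    for r in remote_heads:
        if not any(h in descendants.get(r, ()) for h in local_heads):
            newheads.append(r)
    return len(newheads) > len(remote_heads)

# One local head descending from the sole remote head: no new heads.
print(push_creates_new_heads(["x"], ["r"], {"r": {"x"}}))  # False
# An unrelated local head leaves the remote head uncovered: new head created.
print(push_creates_new_heads(["x"], ["r"], {}))  # True
```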
    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            return remote.addchangegroup(cg, 'push', self.url())
        return ret[1]

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push. once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force:
                remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupinfo(self, nodes):
        self.ui.note(_("%d changesets found\n") % len(nodes))
        if self.ui.debugflag:
            self.ui.debug(_("List of changesets:\n"))
            for node in nodes:
                self.ui.debug("%s\n" % hex(node))

1381 def changegroupsubset(self, bases, heads, source):
1382 def changegroupsubset(self, bases, heads, source):
1382 """This function generates a changegroup consisting of all the nodes
1383 """This function generates a changegroup consisting of all the nodes
1383 that are descendents of any of the bases, and ancestors of any of
1384 that are descendents of any of the bases, and ancestors of any of
1384 the heads.
1385 the heads.
1385
1386
1386 It is fairly complex as determining which filenodes and which
1387 It is fairly complex as determining which filenodes and which
1387 manifest nodes need to be included for the changeset to be complete
1388 manifest nodes need to be included for the changeset to be complete
1388 is non-trivial.
1389 is non-trivial.
1389
1390
1390 Another wrinkle is doing the reverse, figuring out which changeset in
1391 Another wrinkle is doing the reverse, figuring out which changeset in
1391 the changegroup a particular filenode or manifestnode belongs to."""
1392 the changegroup a particular filenode or manifestnode belongs to."""
1392
1393
1393 self.hook('preoutgoing', throw=True, source=source)
1394 self.hook('preoutgoing', throw=True, source=source)
1394
1395
1395 # Set up some initial variables
1396 # Set up some initial variables
1396 # Make it easy to refer to self.changelog
1397 # Make it easy to refer to self.changelog
1397 cl = self.changelog
1398 cl = self.changelog
1398 # msng is short for missing - compute the list of changesets in this
1399 # msng is short for missing - compute the list of changesets in this
1399 # changegroup.
1400 # changegroup.
1400 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1401 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1401 self.changegroupinfo(msng_cl_lst)
1402 self.changegroupinfo(msng_cl_lst)
1402 # Some bases may turn out to be superfluous, and some heads may be
1403 # Some bases may turn out to be superfluous, and some heads may be
1403 # too. nodesbetween will return the minimal set of bases and heads
1404 # too. nodesbetween will return the minimal set of bases and heads
1404 # necessary to re-create the changegroup.
1405 # necessary to re-create the changegroup.
1405
1406
1406 # Known heads are the list of heads that it is assumed the recipient
1407 # Known heads are the list of heads that it is assumed the recipient
1407 # of this changegroup will know about.
1408 # of this changegroup will know about.
1408 knownheads = {}
1409 knownheads = {}
1409 # We assume that all parents of bases are known heads.
1410 # We assume that all parents of bases are known heads.
1410 for n in bases:
1411 for n in bases:
1411 for p in cl.parents(n):
1412 for p in cl.parents(n):
1412 if p != nullid:
1413 if p != nullid:
1413 knownheads[p] = 1
1414 knownheads[p] = 1
1414 knownheads = knownheads.keys()
1415 knownheads = knownheads.keys()
1415 if knownheads:
1416 if knownheads:
1416 # Now that we know what heads are known, we can compute which
1417 # Now that we know what heads are known, we can compute which
1417 # changesets are known. The recipient must know about all
1418 # changesets are known. The recipient must know about all
1418 # changesets required to reach the known heads from the null
1419 # changesets required to reach the known heads from the null
1419 # changeset.
1420 # changeset.
1420 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1421 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1421 junk = None
1422 junk = None
1422 # Transform the list into an ersatz set.
1423 # Transform the list into an ersatz set.
1423 has_cl_set = dict.fromkeys(has_cl_set)
1424 has_cl_set = dict.fromkeys(has_cl_set)
1424 else:
1425 else:
1425 # If there were no known heads, the recipient cannot be assumed to
1426 # If there were no known heads, the recipient cannot be assumed to
1426 # know about any changesets.
1427 # know about any changesets.
1427 has_cl_set = {}
1428 has_cl_set = {}
1428
1429
1429 # Make it easy to refer to self.manifest
1430 # Make it easy to refer to self.manifest
1430 mnfst = self.manifest
1431 mnfst = self.manifest
1431 # We don't know which manifests are missing yet
1432 # We don't know which manifests are missing yet
1432 msng_mnfst_set = {}
1433 msng_mnfst_set = {}
1433 # Nor do we know which filenodes are missing.
1434 # Nor do we know which filenodes are missing.
1434 msng_filenode_set = {}
1435 msng_filenode_set = {}
1435
1436
1436 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1437 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1437 junk = None
1438 junk = None
1438
1439
1439 # A changeset always belongs to itself, so the changenode lookup
1440 # A changeset always belongs to itself, so the changenode lookup
1440 # function for a changenode is identity.
1441 # function for a changenode is identity.
1441 def identity(x):
1442 def identity(x):
1442 return x
1443 return x
1443
1444
1444 # A function generating function. Sets up an environment for the
1445 # A function generating function. Sets up an environment for the
1445 # inner function.
1446 # inner function.
1446 def cmp_by_rev_func(revlog):
1447 def cmp_by_rev_func(revlog):
1447 # Compare two nodes by their revision number in the environment's
1448 # Compare two nodes by their revision number in the environment's
1448 # revision history. Since the revision number both represents the
1449 # revision history. Since the revision number both represents the
1449 # most efficient order to read the nodes in, and represents a
1450 # most efficient order to read the nodes in, and represents a
1450 # topological sorting of the nodes, this function is often useful.
1451 # topological sorting of the nodes, this function is often useful.
1451 def cmp_by_rev(a, b):
1452 def cmp_by_rev(a, b):
1452 return cmp(revlog.rev(a), revlog.rev(b))
1453 return cmp(revlog.rev(a), revlog.rev(b))
1453 return cmp_by_rev
1454 return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents. This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)
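`prune_parents` marks every ancestor of a known-present node as present, then drops those nodes from the missing set. The idea can be exercised on a toy DAG with plain dicts (the node names and the `parents` mapping are invented; the real code walks revlog parent pointers):

```python
def prune_parents(parents, hasset, msngset):
    # Walk ancestors of every node in hasset; anything reachable is also
    # present on the recipient, so drop it from the missing set.
    stack = list(hasset)
    while stack:
        n = stack.pop()
        for p in parents.get(n, []):
            if p not in hasset:
                hasset[p] = 1
                stack.append(p)
    for n in hasset:
        msngset.pop(n, None)

# Linear history a <- b <- c: the recipient has c, so a and b are implied.
parents = {"c": ["b"], "b": ["a"], "a": []}
hasset = {"c": 1}
msngset = {"a": 1, "b": 1, "d": 1}
prune_parents(parents, hasset, msngset)
print(sorted(msngset))  # ['d']
```

Only `d`, which is not an ancestor of anything the recipient has, remains missing.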

        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup. The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and a total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to. It
            # does this by assuming that a filenode belongs to the changenode
            # that the first manifest referencing it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes
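The delta walker above relies on the manifest's line format: a filename and a 40-character hex filenode separated by a NUL byte, which is why it can split on `'\0'` and take `fnode[:40]`. A small standalone parse of such lines (the sample manifest text is made up):

```python
from binascii import unhexlify as bin  # Mercurial aliases this as `bin`

def parse_manifest_lines(text):
    # Each line: "<filename>\0<40 hex chars of the filenode>"
    entries = {}
    for dline in text.splitlines():
        f, fnode = dline.split("\0")
        entries[f] = bin(fnode[:40])
    return entries

sample = "README\0" + "ab" * 20 + "\n" + "src/main.py\0" + "cd" * 20
m = parse_manifest_lines(sample)
print(sorted(m))  # ['README', 'src/main.py']
```

The values are the 20-byte binary node ids, just as `bin(fnode[:40])` produces in the collector.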

        # We have a list of filenodes we think we need for a file, let's
        # remove all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generator function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link
        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if msng_filenode_set.has_key(fname):
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield changegroup.genchunk(fname)
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        yield chnk
                if msng_filenode_set.has_key(fname):
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield changegroup.closechunk()

        if msng_cl_lst:
            self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return util.chunkbuffer(gengroup())

    def changegroup(self, basenodes, source):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])
        self.changegroupinfo(nodes)

        def identity(x):
            return x

        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.genchunk(fname)
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield changegroup.closechunk()

        if nodes:
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source, srctype, url):
        """add changegroup to repo.
        returns number of heads modified or added + 1."""

        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return cl.count()

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype, url=url)

        changesets = files = revisions = 0

        tr = self.transaction()

        # write changelog data to temp files so concurrent readers will not see
        # inconsistent view
        cl = None
        try:
            cl = appendfile.appendchangelog(self.sopener,
                                            self.changelog.version)

            oldheads = len(cl.heads())

            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            cor = cl.count() - 1
            chunkiter = changegroup.chunkiter(source)
            if cl.addgroup(chunkiter, csmap, tr, 1) is None:
                raise util.Abort(_("received changelog group is empty"))
            cnr = cl.count() - 1
            changesets = cnr - cor

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            chunkiter = changegroup.chunkiter(source)
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(chunkiter, revmap, tr)

            # process the files
            self.ui.status(_("adding file changes\n"))
            while 1:
                f = changegroup.getchunk(source)
                if not f:
                    break
                self.ui.debug(_("adding %s revisions\n") % f)
                fl = self.file(f)
                o = fl.count()
                chunkiter = changegroup.chunkiter(source)
                if fl.addgroup(chunkiter, revmap, tr) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += fl.count() - o
                files += 1

            cl.writedata()
        finally:
            if cl:
                cl.cleanup()

        # make changelog see real files again
        self.changelog = changelog.changelog(self.sopener,
                                             self.changelog.version)
        self.changelog.checkinlinesize(tr)

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads != oldheads:
            heads = _(" (%+d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        if changesets > 0:
            self.hook('pretxnchangegroup', throw=True,
                      node=hex(self.changelog.node(cor+1)), source=srctype,
                      url=url)

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
                      source=srctype, url=url)

            for i in xrange(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)),
                          source=srctype, url=url)

        return newheads - oldheads + 1


    def stream_in(self, remote):
        fp = remote.stream_out()
        l = fp.readline()
        try:
            resp = int(l)
        except ValueError:
            raise util.UnexpectedOutput(
                _('Unexpected response from remote server:'), l)
        if resp == 1:
            raise util.Abort(_('operation forbidden by server'))
        elif resp == 2:
            raise util.Abort(_('locking the remote repository failed'))
        elif resp != 0:
            raise util.Abort(_('the server sent an unknown error code'))
        self.ui.status(_('streaming all changes\n'))
        l = fp.readline()
        try:
            total_files, total_bytes = map(int, l.split(' ', 1))
        except (ValueError, TypeError):
            raise util.UnexpectedOutput(
                _('Unexpected response from remote server:'), l)
        self.ui.status(_('%d files to transfer, %s of data\n') %
                       (total_files, util.bytecount(total_bytes)))
        start = time.time()
        for i in xrange(total_files):
            # XXX doesn't support '\n' or '\r' in filenames
            l = fp.readline()
            try:
                name, size = l.split('\0', 1)
                size = int(size)
            except (ValueError, TypeError):
                raise util.UnexpectedOutput(
                    _('Unexpected response from remote server:'), l)
            self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
            ofp = self.sopener(name, 'w')
            for chunk in util.filechunkiter(fp, limit=size):
                ofp.write(chunk)
            ofp.close()
        elapsed = time.time() - start
        self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                       (util.bytecount(total_bytes), elapsed,
                        util.bytecount(total_bytes / elapsed)))
        self.reload()
        return len(self.heads()) + 1
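`stream_in` parses a status code line, then a `<total_files> <total_bytes>` line, then per-file headers of the form `<name>\0<size>` followed by the raw file body. The header parsing can be sketched against an in-memory stream (the payload below is invented; the real format is whatever the server's `stream_out` emits):

```python
import io

def read_stream_headers(fp):
    # First line: "<total_files> <total_bytes>"
    total_files, total_bytes = map(int, fp.readline().split(b" ", 1))
    files = []
    for _ in range(total_files):
        # Per-file header: "<name>\0<size>\n", followed by <size> raw bytes.
        name, size = fp.readline().rstrip(b"\n").split(b"\0", 1)
        size = int(size)
        files.append((name.decode(), size))
        fp.read(size)  # skip over the file body
    return total_bytes, files

payload = b"2 13\n00changelog.i\x005\nAAAAAdata/x.i\x008\nBBBBBBBB"
total, files = read_stream_headers(io.BytesIO(payload))
print(total, files)  # 13 [('00changelog.i', 5), ('data/x.i', 8)]
```

Splitting on the NUL byte is what makes the "no `\n` or `\r` in filenames" caveat in the code above a real limitation: the name field is terminated by the newline of the header line itself.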

    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if stream and not heads and remote.capable('stream'):
            return self.stream_in(remote)
        return self.pull(remote, heads)

# used to avoid circular references so destructors work
def aftertrans(base):
    p = base
    def a():
        util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
        util.rename(os.path.join(p, "journal.dirstate"),
                    os.path.join(p, "undo.dirstate"))
    return a
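`aftertrans` returns a zero-argument callback: it captures only the path string `p`, so firing it later renames the journal files without holding a reference to the repository object (hence no reference cycle to keep destructors from running). A runnable sketch of the pattern against a temporary directory (only the `journal` to `undo` rename is shown, using `os.rename` in place of `util.rename`):

```python
import os
import tempfile

def aftertrans(base):
    # Capture only the path string, not the repo object, so the closure
    # does not create a reference cycle that would delay destructors.
    p = base
    def a():
        os.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
    return a

d = tempfile.mkdtemp()
open(os.path.join(d, "journal"), "w").close()
cb = aftertrans(d)  # nothing happens yet; the rename is deferred
cb()                # transaction finished: journal becomes undo
print(os.path.exists(os.path.join(d, "undo")))  # True
```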

def instance(ui, path, create):
    return localrepository(ui, util.drop_scheme('file', path), create)

def islocal(path):
    return True

@@ -1,443 +1,442 @@
# ui.py - user interface bits for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from i18n import gettext as _
from demandload import *
demandload(globals(), "errno getpass os re socket sys tempfile")
demandload(globals(), "ConfigParser traceback util")

def dupconfig(orig):
    new = util.configparser(orig.defaults())
    updateconfig(orig, new)
    return new

def updateconfig(source, dest, sections=None):
    if not sections:
        sections = source.sections()
    for section in sections:
        if not dest.has_section(section):
            dest.add_section(section)
        for name, value in source.items(section, raw=True):
            dest.set(section, name, value)
26
26
27 class ui(object):
27 class ui(object):
28 def __init__(self, verbose=False, debug=False, quiet=False,
28 def __init__(self, verbose=False, debug=False, quiet=False,
29 interactive=True, traceback=False, report_untrusted=True,
29 interactive=True, traceback=False, report_untrusted=True,
30 parentui=None):
30 parentui=None):
31 self.overlay = None
31 self.overlay = None
32 if parentui is None:
32 if parentui is None:
33 # this is the parent of all ui children
33 # this is the parent of all ui children
34 self.parentui = None
34 self.parentui = None
35 self.readhooks = []
35 self.readhooks = []
36 self.quiet = quiet
36 self.quiet = quiet
37 self.verbose = verbose
37 self.verbose = verbose
38 self.debugflag = debug
38 self.debugflag = debug
39 self.interactive = interactive
39 self.interactive = interactive
40 self.traceback = traceback
40 self.traceback = traceback
41 self.report_untrusted = report_untrusted
41 self.report_untrusted = report_untrusted
42 self.trusted_users = {}
42 self.trusted_users = {}
43 self.trusted_groups = {}
43 self.trusted_groups = {}
44 # if ucdata is not None, its keys must be a superset of cdata's
44 # if ucdata is not None, its keys must be a superset of cdata's
45 self.cdata = util.configparser()
45 self.cdata = util.configparser()
46 self.ucdata = None
46 self.ucdata = None
47 # we always trust global config files
47 # we always trust global config files
48 self.check_trusted = False
48 self.check_trusted = False
49 self.readconfig(util.rcpath())
49 self.readconfig(util.rcpath())
50 self.check_trusted = True
50 self.check_trusted = True
51 self.updateopts(verbose, debug, quiet, interactive)
51 self.updateopts(verbose, debug, quiet, interactive)
52 else:
52 else:
53 # parentui may point to an ui object which is already a child
53 # parentui may point to an ui object which is already a child
54 self.parentui = parentui.parentui or parentui
54 self.parentui = parentui.parentui or parentui
55 self.readhooks = self.parentui.readhooks[:]
55 self.readhooks = self.parentui.readhooks[:]
56 self.trusted_users = parentui.trusted_users.copy()
56 self.trusted_users = parentui.trusted_users.copy()
57 self.trusted_groups = parentui.trusted_groups.copy()
57 self.trusted_groups = parentui.trusted_groups.copy()
58 self.cdata = dupconfig(self.parentui.cdata)
58 self.cdata = dupconfig(self.parentui.cdata)
59 if self.parentui.ucdata:
59 if self.parentui.ucdata:
60 self.ucdata = dupconfig(self.parentui.ucdata)
60 self.ucdata = dupconfig(self.parentui.ucdata)
61 if self.parentui.overlay:
61 if self.parentui.overlay:
62 self.overlay = dupconfig(self.parentui.overlay)
62 self.overlay = dupconfig(self.parentui.overlay)
63
63
64 def __getattr__(self, key):
64 def __getattr__(self, key):
65 return getattr(self.parentui, key)
65 return getattr(self.parentui, key)
66
66
67 def updateopts(self, verbose=False, debug=False, quiet=False,
67 def updateopts(self, verbose=False, debug=False, quiet=False,
68 interactive=True, traceback=False, config=[]):
68 interactive=True, traceback=False, config=[]):
69 for section, name, value in config:
69 for section, name, value in config:
70 self.setconfig(section, name, value)
70 self.setconfig(section, name, value)
71
71
72 if quiet or verbose or debug:
72 if quiet or verbose or debug:
73 self.setconfig('ui', 'quiet', str(bool(quiet)))
73 self.setconfig('ui', 'quiet', str(bool(quiet)))
74 self.setconfig('ui', 'verbose', str(bool(verbose)))
74 self.setconfig('ui', 'verbose', str(bool(verbose)))
75 self.setconfig('ui', 'debug', str(bool(debug)))
75 self.setconfig('ui', 'debug', str(bool(debug)))
76
76
77 self.verbosity_constraints()
77 self.verbosity_constraints()
78
78
79 if not interactive:
79 if not interactive:
80 self.setconfig('ui', 'interactive', 'False')
80 self.setconfig('ui', 'interactive', 'False')
81 self.interactive = False
81 self.interactive = False
82
82
83 self.traceback = self.traceback or traceback
83 self.traceback = self.traceback or traceback
84
84
85 def verbosity_constraints(self):
85 def verbosity_constraints(self):
86 self.quiet = self.configbool('ui', 'quiet')
86 self.quiet = self.configbool('ui', 'quiet')
87 self.verbose = self.configbool('ui', 'verbose')
87 self.verbose = self.configbool('ui', 'verbose')
88 self.debugflag = self.configbool('ui', 'debug')
88 self.debugflag = self.configbool('ui', 'debug')
89
89
90 if self.debugflag:
90 if self.debugflag:
91 self.verbose = True
91 self.verbose = True
92 self.quiet = False
92 self.quiet = False
93 elif self.verbose and self.quiet:
93 elif self.verbose and self.quiet:
94 self.quiet = self.verbose = False
94 self.quiet = self.verbose = False
95
95
96 def _is_trusted(self, fp, f, warn=True):
96 def _is_trusted(self, fp, f, warn=True):
97 if not self.check_trusted:
97 if not self.check_trusted:
98 return True
98 return True
99 st = util.fstat(fp)
99 st = util.fstat(fp)
100 if util.isowner(fp, st):
100 if util.isowner(fp, st):
101 return True
101 return True
102 tusers = self.trusted_users
102 tusers = self.trusted_users
103 tgroups = self.trusted_groups
103 tgroups = self.trusted_groups
104 if not tusers:
104 if not tusers:
105 user = util.username()
105 user = util.username()
106 if user is not None:
106 if user is not None:
107 self.trusted_users[user] = 1
107 self.trusted_users[user] = 1
108 self.fixconfig(section='trusted')
108 self.fixconfig(section='trusted')
109 if (tusers or tgroups) and '*' not in tusers and '*' not in tgroups:
109 if (tusers or tgroups) and '*' not in tusers and '*' not in tgroups:
110 user = util.username(st.st_uid)
110 user = util.username(st.st_uid)
111 group = util.groupname(st.st_gid)
111 group = util.groupname(st.st_gid)
112 if user not in tusers and group not in tgroups:
112 if user not in tusers and group not in tgroups:
113 if warn and self.report_untrusted:
113 if warn and self.report_untrusted:
114 self.warn(_('Not trusting file %s from untrusted '
114 self.warn(_('Not trusting file %s from untrusted '
115 'user %s, group %s\n') % (f, user, group))
115 'user %s, group %s\n') % (f, user, group))
116 return False
116 return False
117 return True
117 return True
118
118
119 def readconfig(self, fn, root=None):
119 def readconfig(self, fn, root=None):
120 if isinstance(fn, basestring):
120 if isinstance(fn, basestring):
121 fn = [fn]
121 fn = [fn]
122 for f in fn:
122 for f in fn:
123 try:
123 try:
124 fp = open(f)
124 fp = open(f)
125 except IOError:
125 except IOError:
126 continue
126 continue
127 cdata = self.cdata
127 cdata = self.cdata
128 trusted = self._is_trusted(fp, f)
128 trusted = self._is_trusted(fp, f)
129 if not trusted:
129 if not trusted:
130 if self.ucdata is None:
130 if self.ucdata is None:
131 self.ucdata = dupconfig(self.cdata)
131 self.ucdata = dupconfig(self.cdata)
132 cdata = self.ucdata
132 cdata = self.ucdata
133 elif self.ucdata is not None:
133 elif self.ucdata is not None:
134 # use a separate configparser, so that we don't accidentally
134 # use a separate configparser, so that we don't accidentally
135 # override ucdata settings later on.
135 # override ucdata settings later on.
136 cdata = util.configparser()
136 cdata = util.configparser()
137
137
138 try:
138 try:
139 cdata.readfp(fp, f)
139 cdata.readfp(fp, f)
140 except ConfigParser.ParsingError, inst:
140 except ConfigParser.ParsingError, inst:
141 msg = _("Failed to parse %s\n%s") % (f, inst)
141 msg = _("Failed to parse %s\n%s") % (f, inst)
142 if trusted:
142 if trusted:
143 raise util.Abort(msg)
143 raise util.Abort(msg)
144 self.warn(_("Ignored: %s\n") % msg)
144 self.warn(_("Ignored: %s\n") % msg)
145
145
146 if trusted:
146 if trusted:
147 if cdata != self.cdata:
147 if cdata != self.cdata:
148 updateconfig(cdata, self.cdata)
148 updateconfig(cdata, self.cdata)
149 if self.ucdata is not None:
149 if self.ucdata is not None:
150 updateconfig(cdata, self.ucdata)
150 updateconfig(cdata, self.ucdata)
151 # override data from config files with data set with ui.setconfig
151 # override data from config files with data set with ui.setconfig
152 if self.overlay:
152 if self.overlay:
153 updateconfig(self.overlay, self.cdata)
153 updateconfig(self.overlay, self.cdata)
154 if root is None:
154 if root is None:
155 root = os.path.expanduser('~')
155 root = os.path.expanduser('~')
156 self.fixconfig(root=root)
156 self.fixconfig(root=root)
157 for hook in self.readhooks:
157 for hook in self.readhooks:
158 hook(self)
158 hook(self)
159
159
160 def addreadhook(self, hook):
160 def addreadhook(self, hook):
161 self.readhooks.append(hook)
161 self.readhooks.append(hook)
162
162
163 def readsections(self, filename, *sections):
163 def readsections(self, filename, *sections):
164 """Read filename and add only the specified sections to the config data
164 """Read filename and add only the specified sections to the config data
165
165
166 The settings are added to the trusted config data.
166 The settings are added to the trusted config data.
167 """
167 """
168 if not sections:
168 if not sections:
169 return
169 return
170
170
171 cdata = util.configparser()
171 cdata = util.configparser()
172 try:
172 try:
173 cdata.read(filename)
173 cdata.read(filename)
174 except ConfigParser.ParsingError, inst:
174 except ConfigParser.ParsingError, inst:
175 raise util.Abort(_("failed to parse %s\n%s") % (filename,
175 raise util.Abort(_("failed to parse %s\n%s") % (filename,
176 inst))
176 inst))
177
177
178 for section in sections:
178 for section in sections:
179 if not cdata.has_section(section):
179 if not cdata.has_section(section):
180 cdata.add_section(section)
180 cdata.add_section(section)
181
181
182 updateconfig(cdata, self.cdata, sections)
182 updateconfig(cdata, self.cdata, sections)
183 if self.ucdata:
183 if self.ucdata:
184 updateconfig(cdata, self.ucdata, sections)
184 updateconfig(cdata, self.ucdata, sections)
185
185
186 def fixconfig(self, section=None, name=None, value=None, root=None):
186 def fixconfig(self, section=None, name=None, value=None, root=None):
187 # translate paths relative to root (or home) into absolute paths
187 # translate paths relative to root (or home) into absolute paths
188 if section is None or section == 'paths':
188 if section is None or section == 'paths':
189 if root is None:
189 if root is None:
190 root = os.getcwd()
190 root = os.getcwd()
191 items = section and [(name, value)] or []
191 items = section and [(name, value)] or []
192 for cdata in self.cdata, self.ucdata, self.overlay:
192 for cdata in self.cdata, self.ucdata, self.overlay:
193 if not cdata: continue
193 if not cdata: continue
194 if not items and cdata.has_section('paths'):
194 if not items and cdata.has_section('paths'):
195 pathsitems = cdata.items('paths')
195 pathsitems = cdata.items('paths')
196 else:
196 else:
197 pathsitems = items
197 pathsitems = items
198 for n, path in pathsitems:
198 for n, path in pathsitems:
199 if path and "://" not in path and not os.path.isabs(path):
199 if path and "://" not in path and not os.path.isabs(path):
200 cdata.set("paths", n, os.path.join(root, path))
200 cdata.set("paths", n, os.path.join(root, path))
201
201
202 # update quiet/verbose/debug and interactive status
202 # update quiet/verbose/debug and interactive status
203 if section is None or section == 'ui':
203 if section is None or section == 'ui':
204 if name is None or name in ('quiet', 'verbose', 'debug'):
204 if name is None or name in ('quiet', 'verbose', 'debug'):
205 self.verbosity_constraints()
205 self.verbosity_constraints()
206
206
207 if name is None or name == 'interactive':
207 if name is None or name == 'interactive':
208 self.interactive = self.configbool("ui", "interactive", True)
208 self.interactive = self.configbool("ui", "interactive", True)
209
209
210 # update trust information
210 # update trust information
211 if (section is None or section == 'trusted') and self.trusted_users:
211 if (section is None or section == 'trusted') and self.trusted_users:
212 for user in self.configlist('trusted', 'users'):
212 for user in self.configlist('trusted', 'users'):
213 self.trusted_users[user] = 1
213 self.trusted_users[user] = 1
214 for group in self.configlist('trusted', 'groups'):
214 for group in self.configlist('trusted', 'groups'):
215 self.trusted_groups[group] = 1
215 self.trusted_groups[group] = 1
216
216
217 def setconfig(self, section, name, value):
217 def setconfig(self, section, name, value):
218 if not self.overlay:
218 if not self.overlay:
219 self.overlay = util.configparser()
219 self.overlay = util.configparser()
220 for cdata in (self.overlay, self.cdata, self.ucdata):
220 for cdata in (self.overlay, self.cdata, self.ucdata):
221 if not cdata: continue
221 if not cdata: continue
222 if not cdata.has_section(section):
222 if not cdata.has_section(section):
223 cdata.add_section(section)
223 cdata.add_section(section)
224 cdata.set(section, name, value)
224 cdata.set(section, name, value)
225 self.fixconfig(section, name, value)
225 self.fixconfig(section, name, value)
226
226
227 def _get_cdata(self, untrusted):
227 def _get_cdata(self, untrusted):
228 if untrusted and self.ucdata:
228 if untrusted and self.ucdata:
229 return self.ucdata
229 return self.ucdata
230 return self.cdata
230 return self.cdata
231
231
232 def _config(self, section, name, default, funcname, untrusted, abort):
232 def _config(self, section, name, default, funcname, untrusted, abort):
233 cdata = self._get_cdata(untrusted)
233 cdata = self._get_cdata(untrusted)
234 if cdata.has_option(section, name):
234 if cdata.has_option(section, name):
235 try:
235 try:
236 func = getattr(cdata, funcname)
236 func = getattr(cdata, funcname)
237 return func(section, name)
237 return func(section, name)
238 except ConfigParser.InterpolationError, inst:
238 except ConfigParser.InterpolationError, inst:
239 msg = _("Error in configuration section [%s] "
239 msg = _("Error in configuration section [%s] "
240 "parameter '%s':\n%s") % (section, name, inst)
240 "parameter '%s':\n%s") % (section, name, inst)
241 if abort:
241 if abort:
242 raise util.Abort(msg)
242 raise util.Abort(msg)
243 self.warn(_("Ignored: %s\n") % msg)
243 self.warn(_("Ignored: %s\n") % msg)
244 return default
244 return default
245
245
246 def _configcommon(self, section, name, default, funcname, untrusted):
246 def _configcommon(self, section, name, default, funcname, untrusted):
247 value = self._config(section, name, default, funcname,
247 value = self._config(section, name, default, funcname,
248 untrusted, abort=True)
248 untrusted, abort=True)
249 if self.debugflag and not untrusted and self.ucdata:
249 if self.debugflag and not untrusted and self.ucdata:
250 uvalue = self._config(section, name, None, funcname,
250 uvalue = self._config(section, name, None, funcname,
251 untrusted=True, abort=False)
251 untrusted=True, abort=False)
252 if uvalue is not None and uvalue != value:
252 if uvalue is not None and uvalue != value:
253 self.warn(_("Ignoring untrusted configuration option "
253 self.warn(_("Ignoring untrusted configuration option "
254 "%s.%s = %s\n") % (section, name, uvalue))
254 "%s.%s = %s\n") % (section, name, uvalue))
255 return value
255 return value
256
256
257 def config(self, section, name, default=None, untrusted=False):
257 def config(self, section, name, default=None, untrusted=False):
258 return self._configcommon(section, name, default, 'get', untrusted)
258 return self._configcommon(section, name, default, 'get', untrusted)
259
259
260 def configbool(self, section, name, default=False, untrusted=False):
260 def configbool(self, section, name, default=False, untrusted=False):
261 return self._configcommon(section, name, default, 'getboolean',
261 return self._configcommon(section, name, default, 'getboolean',
262 untrusted)
262 untrusted)
263
263
264 def configlist(self, section, name, default=None, untrusted=False):
264 def configlist(self, section, name, default=None, untrusted=False):
265 """Return a list of comma/space separated strings"""
265 """Return a list of comma/space separated strings"""
266 result = self.config(section, name, untrusted=untrusted)
266 result = self.config(section, name, untrusted=untrusted)
267 if result is None:
267 if result is None:
268 result = default or []
268 result = default or []
269 if isinstance(result, basestring):
269 if isinstance(result, basestring):
270 result = result.replace(",", " ").split()
270 result = result.replace(",", " ").split()
271 return result
271 return result
272
272
273 def has_config(self, section, untrusted=False):
273 def has_config(self, section, untrusted=False):
274 '''tell whether section exists in config.'''
274 '''tell whether section exists in config.'''
275 cdata = self._get_cdata(untrusted)
275 cdata = self._get_cdata(untrusted)
276 return cdata.has_section(section)
276 return cdata.has_section(section)
277
277
278 def _configitems(self, section, untrusted, abort):
278 def _configitems(self, section, untrusted, abort):
279 items = {}
279 items = {}
280 cdata = self._get_cdata(untrusted)
280 cdata = self._get_cdata(untrusted)
281 if cdata.has_section(section):
281 if cdata.has_section(section):
282 try:
282 try:
283 items.update(dict(cdata.items(section)))
283 items.update(dict(cdata.items(section)))
284 except ConfigParser.InterpolationError, inst:
284 except ConfigParser.InterpolationError, inst:
285 msg = _("Error in configuration section [%s]:\n"
285 msg = _("Error in configuration section [%s]:\n"
286 "%s") % (section, inst)
286 "%s") % (section, inst)
287 if abort:
287 if abort:
288 raise util.Abort(msg)
288 raise util.Abort(msg)
289 self.warn(_("Ignored: %s\n") % msg)
289 self.warn(_("Ignored: %s\n") % msg)
290 return items
290 return items
291
291
292 def configitems(self, section, untrusted=False):
292 def configitems(self, section, untrusted=False):
293 items = self._configitems(section, untrusted=untrusted, abort=True)
293 items = self._configitems(section, untrusted=untrusted, abort=True)
294 if self.debugflag and not untrusted and self.ucdata:
294 if self.debugflag and not untrusted and self.ucdata:
295 uitems = self._configitems(section, untrusted=True, abort=False)
295 uitems = self._configitems(section, untrusted=True, abort=False)
296 keys = uitems.keys()
296 keys = uitems.keys()
297 keys.sort()
297 keys.sort()
298 for k in keys:
298 for k in keys:
299 if uitems[k] != items.get(k):
299 if uitems[k] != items.get(k):
300 self.warn(_("Ignoring untrusted configuration option "
300 self.warn(_("Ignoring untrusted configuration option "
301 "%s.%s = %s\n") % (section, k, uitems[k]))
301 "%s.%s = %s\n") % (section, k, uitems[k]))
302 x = items.items()
302 x = items.items()
303 x.sort()
303 x.sort()
304 return x
304 return x
305
305
306 def walkconfig(self, untrusted=False):
306 def walkconfig(self, untrusted=False):
307 cdata = self._get_cdata(untrusted)
307 cdata = self._get_cdata(untrusted)
308 sections = cdata.sections()
308 sections = cdata.sections()
309 sections.sort()
309 sections.sort()
310 for section in sections:
310 for section in sections:
311 for name, value in self.configitems(section, untrusted):
311 for name, value in self.configitems(section, untrusted):
312 yield section, name, value.replace('\n', '\\n')
312 yield section, name, value.replace('\n', '\\n')
313
313
314 def extensions(self):
314 def extensions(self):
315 result = self.configitems("extensions")
315 result = self.configitems("extensions")
316 for i, (key, value) in enumerate(result):
316 for i, (key, value) in enumerate(result):
317 if value:
317 if value:
318 result[i] = (key, os.path.expanduser(value))
318 result[i] = (key, os.path.expanduser(value))
319 return result
319 return result
320
320
321 def hgignorefiles(self):
321 def hgignorefiles(self):
322 result = []
322 result = []
323 for key, value in self.configitems("ui"):
323 for key, value in self.configitems("ui"):
324 if key == 'ignore' or key.startswith('ignore.'):
324 if key == 'ignore' or key.startswith('ignore.'):
325 result.append(os.path.expanduser(value))
325 result.append(os.path.expanduser(value))
326 return result
326 return result
327
327
328 def configrevlog(self):
328 def configrevlog(self):
329 result = {}
329 result = {}
330 for key, value in self.configitems("revlog"):
330 for key, value in self.configitems("revlog"):
331 result[key.lower()] = value
331 result[key.lower()] = value
332 return result
332 return result
333
333
    def username(self):
        """Return default username to be used in commits.

        Searched in this order: $HGUSER, [ui] section of hgrcs, $EMAIL
        and stop searching if one of these is set.
-       Abort if no username is found, to force specifying the commit user
-       with command line option or repo hgrc.
+       If not found, use ($LOGNAME or $USER or $LNAME or
+       $USERNAME) + "@full.hostname".
        """
        user = os.environ.get("HGUSER")
        if user is None:
            user = self.config("ui", "username")
        if user is None:
            user = os.environ.get("EMAIL")
        if not user:
-           self.status(_("Please choose a commit username to be recorded "
-                         "in the changelog via\ncommand line option "
-                         '(-u "First Last <email@example.com>"), in the\n'
-                         "configuration files (hgrc), or by setting the "
-                         "EMAIL environment variable.\n\n"))
-           raise util.Abort(_("No commit username specified!"))
+           try:
+               user = '%s@%s' % (util.getuser(), socket.getfqdn())
+           except KeyError:
+               raise util.Abort(_("Please specify a username."))
+           self.warn(_("No username found, using '%s' instead\n") % user)
        return user

    def shortuser(self, user):
        """Return a short representation of a user name or email address."""
        if not self.verbose: user = util.shortuser(user)
        return user

    def expandpath(self, loc, default=None):
        """Return repository location relative to cwd or from [paths]"""
        if "://" in loc or os.path.isdir(loc):
            return loc

        path = self.config("paths", loc)
        if not path and default is not None:
            path = self.config("paths", default)
        return path or loc

    def write(self, *args):
        for a in args:
            sys.stdout.write(str(a))

    def write_err(self, *args):
        try:
            if not sys.stdout.closed: sys.stdout.flush()
            for a in args:
                sys.stderr.write(str(a))
        except IOError, inst:
            if inst.errno != errno.EPIPE:
                raise

    def flush(self):
        try: sys.stdout.flush()
        except: pass
        try: sys.stderr.flush()
        except: pass

    def readline(self):
        return sys.stdin.readline()[:-1]
    def prompt(self, msg, pat=None, default="y"):
        if not self.interactive: return default
        while 1:
            self.write(msg, " ")
            r = self.readline()
            if not pat or re.match(pat, r):
                return r
            else:
                self.write(_("unrecognized response\n"))
    def getpass(self, prompt=None, default=None):
        if not self.interactive: return default
        return getpass.getpass(prompt or _('password: '))
    def status(self, *msg):
        if not self.quiet: self.write(*msg)
    def warn(self, *msg):
        self.write_err(*msg)
    def note(self, *msg):
        if self.verbose: self.write(*msg)
    def debug(self, *msg):
        if self.debugflag: self.write(*msg)
    def edit(self, text, user):
        (fd, name) = tempfile.mkstemp(prefix="hg-editor-", suffix=".txt",
                                      text=True)
        try:
            f = os.fdopen(fd, "w")
            f.write(text)
            f.close()

            editor = (os.environ.get("HGEDITOR") or
                      self.config("ui", "editor") or
                      os.environ.get("EDITOR", "vi"))

            util.system("%s \"%s\"" % (editor, name),
                        environ={'HGUSER': user},
                        onerr=util.Abort, errprefix=_("edit failed"))

            f = open(name)
            t = f.read()
            f.close()
            t = re.sub("(?m)^HG:.*\n", "", t)
        finally:
            os.unlink(name)

        return t

    def print_exc(self):
        '''print exception traceback if traceback printing enabled.
        only to call in exception handler. returns true if traceback
        printed.'''
        if self.traceback:
            traceback.print_exc()
        return self.traceback
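The `ui.username()` change above replaces a hard abort with a derived fallback plus a warning. A minimal standalone sketch of that new behaviour, using the stdlib `getpass.getuser()` and `socket.getfqdn()` in place of Mercurial's own `util.getuser()` wrapper (the function name `fallback_username` is illustrative, not part of hg):

```python
import getpass
import socket

def fallback_username():
    """Sketch of the r3721 fallback: when no username is configured,
    build login@fqdn and only warn, instead of aborting outright."""
    try:
        # getpass.getuser() consults $LOGNAME/$USER/$LNAME/$USERNAME,
        # matching the docstring added by this changeset
        user = '%s@%s' % (getpass.getuser(), socket.getfqdn())
    except KeyError:
        # only an undeterminable login name remains a hard error
        raise RuntimeError("Please specify a username.")
    print("No username found, using '%s' instead" % user)
    return user
```

Note the design shift: configuration lookup order ($HGUSER, `[ui] username`, $EMAIL) is unchanged; only the final "nothing configured" case becomes non-fatal.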
@@ -1,1085 +1,1099 b''
1 """
1 """
2 util.py - Mercurial utility functions and platform specfic implementations
2 util.py - Mercurial utility functions and platform specfic implementations
3
3
4 Copyright 2005 K. Thananchayan <thananck@yahoo.com>
4 Copyright 2005 K. Thananchayan <thananck@yahoo.com>
5 Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
5 Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
6 Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
6 Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
7
7
8 This software may be used and distributed according to the terms
8 This software may be used and distributed according to the terms
9 of the GNU General Public License, incorporated herein by reference.
9 of the GNU General Public License, incorporated herein by reference.
10
10
11 This contains helper routines that are independent of the SCM core and hide
11 This contains helper routines that are independent of the SCM core and hide
12 platform-specific details from the core.
12 platform-specific details from the core.
13 """
13 """
14
14
15 from i18n import gettext as _
15 from i18n import gettext as _
16 from demandload import *
16 from demandload import *
17 demandload(globals(), "cStringIO errno getpass popen2 re shutil sys tempfile")
17 demandload(globals(), "cStringIO errno getpass popen2 re shutil sys tempfile")
18 demandload(globals(), "os threading time calendar ConfigParser")
18 demandload(globals(), "os threading time calendar ConfigParser")
19
19
20 # used by parsedate
20 # used by parsedate
21 defaultdateformats = ('%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M',
21 defaultdateformats = ('%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M',
22 '%a %b %d %H:%M:%S %Y')
22 '%a %b %d %H:%M:%S %Y')
23
23
24 class SignalInterrupt(Exception):
24 class SignalInterrupt(Exception):
25 """Exception raised on SIGTERM and SIGHUP."""
25 """Exception raised on SIGTERM and SIGHUP."""
26
26
27 # like SafeConfigParser but with case-sensitive keys
27 # like SafeConfigParser but with case-sensitive keys
28 class configparser(ConfigParser.SafeConfigParser):
28 class configparser(ConfigParser.SafeConfigParser):
29 def optionxform(self, optionstr):
29 def optionxform(self, optionstr):
30 return optionstr
30 return optionstr
31
31
32 def cachefunc(func):
32 def cachefunc(func):
33 '''cache the result of function calls'''
33 '''cache the result of function calls'''
34 # XXX doesn't handle keywords args
34 # XXX doesn't handle keywords args
35 cache = {}
35 cache = {}
36 if func.func_code.co_argcount == 1:
36 if func.func_code.co_argcount == 1:
37 # we gain a small amount of time because
37 # we gain a small amount of time because
38 # we don't need to pack/unpack the list
38 # we don't need to pack/unpack the list
39 def f(arg):
39 def f(arg):
40 if arg not in cache:
40 if arg not in cache:
41 cache[arg] = func(arg)
41 cache[arg] = func(arg)
42 return cache[arg]
42 return cache[arg]
43 else:
43 else:
44 def f(*args):
44 def f(*args):
45 if args not in cache:
45 if args not in cache:
46 cache[args] = func(*args)
46 cache[args] = func(*args)
47 return cache[args]
47 return cache[args]
48
48
49 return f
49 return f
50
50
51 def pipefilter(s, cmd):
51 def pipefilter(s, cmd):
52 '''filter string S through command CMD, returning its output'''
52 '''filter string S through command CMD, returning its output'''
53 (pout, pin) = popen2.popen2(cmd, -1, 'b')
53 (pout, pin) = popen2.popen2(cmd, -1, 'b')
54 def writer():
54 def writer():
55 try:
55 try:
56 pin.write(s)
56 pin.write(s)
57 pin.close()
57 pin.close()
58 except IOError, inst:
58 except IOError, inst:
59 if inst.errno != errno.EPIPE:
59 if inst.errno != errno.EPIPE:
60 raise
60 raise
61
61
62 # we should use select instead on UNIX, but this will work on most
62 # we should use select instead on UNIX, but this will work on most
63 # systems, including Windows
63 # systems, including Windows
64 w = threading.Thread(target=writer)
64 w = threading.Thread(target=writer)
65 w.start()
65 w.start()
66 f = pout.read()
66 f = pout.read()
67 pout.close()
67 pout.close()
68 w.join()
68 w.join()
69 return f
69 return f
70
70
def tempfilter(s, cmd):
    '''filter string S through a pair of temporary files with CMD.
    CMD is used as a template to create the real command to be run,
    with the strings INFILE and OUTFILE replaced by the real names of
    the temporary files generated.'''
    inname, outname = None, None
    try:
        infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
        fp = os.fdopen(infd, 'wb')
        fp.write(s)
        fp.close()
        outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
        os.close(outfd)
        cmd = cmd.replace('INFILE', inname)
        cmd = cmd.replace('OUTFILE', outname)
        code = os.system(cmd)
        if code: raise Abort(_("command '%s' failed: %s") %
                             (cmd, explain_exit(code)))
        return open(outname, 'rb').read()
    finally:
        try:
            if inname: os.unlink(inname)
        except: pass
        try:
            if outname: os.unlink(outname)
        except: pass

filtertable = {
    'tempfile:': tempfilter,
    'pipe:': pipefilter,
    }

def filter(s, cmd):
    "filter a string through a command that transforms its input to its output"
    for name, fn in filtertable.iteritems():
        if cmd.startswith(name):
            return fn(s, cmd[len(name):].lstrip())
    return pipefilter(s, cmd)

def find_in_path(name, path, default=None):
    '''find name in a search path. path can be a string (which will be
    split with os.pathsep) or an iterable of strings. if name is found,
    return the path to it; otherwise return default.'''
    if isinstance(path, str):
        path = path.split(os.pathsep)
    for p in path:
        p_name = os.path.join(p, name)
        if os.path.exists(p_name):
            return p_name
    return default

def binary(s):
    """return true if a string is binary data using diff's heuristic"""
    if s and '\0' in s[:4096]:
        return True
    return False

def unique(g):
    """return the unique elements of iterable g, preserving order"""
    seen = {}
    l = []
    for f in g:
        if f not in seen:
            seen[f] = 1
            l.append(f)
    return l

class Abort(Exception):
    """Raised if a command needs to print an error and exit."""

class UnexpectedOutput(Abort):
    """Raised to print an error with part of output and exit."""

def always(fn): return True
def never(fn): return False

def patkind(name, dflt_pat='glob'):
    """Split a string into an optional pattern kind prefix and the
    actual pattern."""
    for prefix in 're', 'glob', 'path', 'relglob', 'relpath', 'relre':
        if name.startswith(prefix + ':'): return name.split(':', 1)
    return dflt_pat, name

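# A short illustration of the prefix splitting performed by patkind(): a
# recognized kind prefix is stripped off, and anything without a prefix falls
# back to the default kind. This is a stand-alone sketch, not the module's own
# function:

```python
# Stand-alone sketch of patkind()'s behaviour (illustration only).
def patkind_sketch(name, dflt_pat='glob'):
    for prefix in 're', 'glob', 'path', 'relglob', 'relpath', 'relre':
        if name.startswith(prefix + ':'):
            return tuple(name.split(':', 1))
    return dflt_pat, name

assert patkind_sketch('re:a.*b') == ('re', 'a.*b')
assert patkind_sketch('src/*.py') == ('glob', 'src/*.py')
assert patkind_sketch('foo', dflt_pat='relpath') == ('relpath', 'foo')
```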
def globre(pat, head='^', tail='$'):
    "convert a glob pattern into a regexp"
    i, n = 0, len(pat)
    res = ''
    group = False
    def peek(): return i < n and pat[i]
    while i < n:
        c = pat[i]
        i = i+1
        if c == '*':
            if peek() == '*':
                i += 1
                res += '.*'
            else:
                res += '[^/]*'
        elif c == '?':
            res += '.'
        elif c == '[':
            j = i
            if j < n and pat[j] in '!]':
                j += 1
            while j < n and pat[j] != ']':
                j += 1
            if j >= n:
                res += '\\['
            else:
                stuff = pat[i:j].replace('\\','\\\\')
                i = j + 1
                if stuff[0] == '!':
                    stuff = '^' + stuff[1:]
                elif stuff[0] == '^':
                    stuff = '\\' + stuff
                res = '%s[%s]' % (res, stuff)
        elif c == '{':
            group = True
            res += '(?:'
        elif c == '}' and group:
            res += ')'
            group = False
        elif c == ',' and group:
            res += '|'
        elif c == '\\':
            p = peek()
            if p:
                i += 1
                res += re.escape(p)
            else:
                res += re.escape(c)
        else:
            res += re.escape(c)
    return head + res + tail

_globchars = {'[': 1, '{': 1, '*': 1, '?': 1}

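# The translation above maps `*` to `[^/]*` (so it cannot cross a directory
# boundary), `**` to `.*` (so it can), and `?` to `.`. An illustration of the
# resulting regexes, with the patterns written out by hand to mirror what the
# conversion produces:

```python
import re

# '*' must not cross directory boundaries, while '**' may.
star = re.compile('^src/[^/]*\\.py$')   # corresponds to glob 'src/*.py'
dstar = re.compile('^src/.*\\.py$')     # corresponds to glob 'src/**.py'

assert star.match('src/util.py')
assert not star.match('src/mercurial/util.py')
assert dstar.match('src/mercurial/util.py')
```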
def pathto(n1, n2):
    '''return the relative path from one place to another.
    n1 should use os.sep to separate directories
    n2 should use "/" to separate directories
    returns an os.sep-separated path.
    '''
    if not n1: return localpath(n2)
    a, b = n1.split(os.sep), n2.split('/')
    a.reverse()
    b.reverse()
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    return os.sep.join((['..'] * len(a)) + b)

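# The algorithm above strips the components shared by both paths, then climbs
# out of what remains of n1 with '..' entries. The same logic, specialized to
# '/' separators purely for illustration:

```python
# Stand-alone sketch of pathto()'s algorithm with '/' separators.
def pathto_sketch(n1, n2):
    a, b = n1.split('/'), n2.split('/')
    a.reverse()
    b.reverse()
    # discard the common leading components
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    # climb out of the remainder of n1, then descend into n2
    return '/'.join(['..'] * len(a) + b)

assert pathto_sketch('a/b/c', 'a/b/d') == '../d'
assert pathto_sketch('a/b', 'a/b/c/d') == 'c/d'
```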
def canonpath(root, cwd, myname):
    """return the canonical path of myname, given cwd and root"""
    if root == os.sep:
        rootsep = os.sep
    elif root.endswith(os.sep):
        rootsep = root
    else:
        rootsep = root + os.sep
    name = myname
    if not os.path.isabs(name):
        name = os.path.join(root, cwd, name)
    name = os.path.normpath(name)
    if name != rootsep and name.startswith(rootsep):
        name = name[len(rootsep):]
        audit_path(name)
        return pconvert(name)
    elif name == root:
        return ''
    else:
        # Determine whether `name' is in the hierarchy at or beneath `root',
        # by iterating name=dirname(name) until that causes no change (can't
        # check name == '/', because that doesn't work on windows). For each
        # `name', compare dev/inode numbers. If they match, the list `rel'
        # holds the reversed list of components making up the relative file
        # name we want.
        root_st = os.stat(root)
        rel = []
        while True:
            try:
                name_st = os.stat(name)
            except OSError:
                break
            if samestat(name_st, root_st):
                rel.reverse()
                name = os.path.join(*rel)
                audit_path(name)
                return pconvert(name)
            dirname, basename = os.path.split(name)
            rel.append(basename)
            if dirname == name:
                break
            name = dirname

    raise Abort('%s not under root' % myname)

def matcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
    return _matcher(canonroot, cwd, names, inc, exc, head, 'glob', src)

def cmdmatcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
    if os.name == 'nt':
        dflt_pat = 'glob'
    else:
        dflt_pat = 'relpath'
    return _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src)

def _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src):
    """build a function to match a set of file patterns

    arguments:
    canonroot - the canonical root of the tree you're matching against
    cwd - the current working directory, if relevant
    names - patterns to find
    inc - patterns to include
    exc - patterns to exclude
    head - a regex to prepend to patterns to control whether a match is rooted

    a pattern is one of:
    'glob:<rooted glob>'
    're:<rooted regexp>'
    'path:<rooted path>'
    'relglob:<relative glob>'
    'relpath:<relative path>'
    'relre:<relative regexp>'
    '<rooted path or regexp>'

    returns:
    a 3-tuple containing
    - list of explicit non-pattern names passed in
    - a bool match(filename) function
    - a bool indicating if any patterns were passed in

    todo:
    make head regex a rooted bool
    """

    def contains_glob(name):
        for c in name:
            if c in _globchars: return True
        return False

    def regex(kind, name, tail):
        '''convert a pattern into a regular expression'''
        if kind == 're':
            return name
        elif kind == 'path':
            return '^' + re.escape(name) + '(?:/|$)'
        elif kind == 'relglob':
            return head + globre(name, '(?:|.*/)', tail)
        elif kind == 'relpath':
            return head + re.escape(name) + tail
        elif kind == 'relre':
            if name.startswith('^'):
                return name
            return '.*' + name
        return head + globre(name, '', tail)

    def matchfn(pats, tail):
        """build a matching function from a set of patterns"""
        if not pats:
            return
        matches = []
        for k, p in pats:
            try:
                pat = '(?:%s)' % regex(k, p, tail)
                matches.append(re.compile(pat).match)
            except re.error:
                if src: raise Abort("%s: invalid pattern (%s): %s" % (src, k, p))
                else: raise Abort("invalid pattern (%s): %s" % (k, p))

        def buildfn(text):
            for m in matches:
                r = m(text)
                if r:
                    return r

        return buildfn

    def globprefix(pat):
        '''return the non-glob prefix of a path, e.g. foo/* -> foo'''
        root = []
        for p in pat.split(os.sep):
            if contains_glob(p): break
            root.append(p)
        return '/'.join(root)

    pats = []
    files = []
    roots = []
    for kind, name in [patkind(p, dflt_pat) for p in names]:
        if kind in ('glob', 'relpath'):
            name = canonpath(canonroot, cwd, name)
            if name == '':
                kind, name = 'glob', '**'
        if kind in ('glob', 'path', 're'):
            pats.append((kind, name))
            if kind == 'glob':
                root = globprefix(name)
                if root: roots.append(root)
        elif kind == 'relpath':
            files.append((kind, name))
            roots.append(name)

    patmatch = matchfn(pats, '$') or always
    filematch = matchfn(files, '(?:/|$)') or always
    incmatch = always
    if inc:
        inckinds = [patkind(canonpath(canonroot, cwd, i)) for i in inc]
        incmatch = matchfn(inckinds, '(?:/|$)')
    excmatch = lambda fn: False
    if exc:
        exckinds = [patkind(canonpath(canonroot, cwd, x)) for x in exc]
        excmatch = matchfn(exckinds, '(?:/|$)')

    return (roots,
            lambda fn: (incmatch(fn) and not excmatch(fn) and
                        (fn.endswith('/') or
                         (not pats and not files) or
                         (pats and patmatch(fn)) or
                         (files and filematch(fn)))),
            (inc or exc or (pats and pats != [('glob', '**')])) and True)

def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None):
    '''enhanced shell command execution.
    run with environment maybe modified, maybe in different dir.

    if command fails and onerr is None, return status. if ui object,
    print error message and return status, else raise onerr object as
    exception.'''
    def py2shell(val):
        'convert python object into string that is useful to shell'
        if val in (None, False):
            return '0'
        if val == True:
            return '1'
        return str(val)
    oldenv = {}
    for k in environ:
        oldenv[k] = os.environ.get(k)
    if cwd is not None:
        oldcwd = os.getcwd()
    try:
        for k, v in environ.iteritems():
            os.environ[k] = py2shell(v)
        if cwd is not None and oldcwd != cwd:
            os.chdir(cwd)
        rc = os.system(cmd)
        if rc and onerr:
            errmsg = '%s %s' % (os.path.basename(cmd.split(None, 1)[0]),
                                explain_exit(rc)[0])
            if errprefix:
                errmsg = '%s: %s' % (errprefix, errmsg)
            try:
                onerr.warn(errmsg + '\n')
            except AttributeError:
                raise onerr(errmsg)
        return rc
    finally:
        for k, v in oldenv.iteritems():
            if v is None:
                del os.environ[k]
            else:
                os.environ[k] = v
        if cwd is not None and oldcwd != cwd:
            os.chdir(oldcwd)

def rename(src, dst):
    """forcibly rename a file"""
    try:
        os.rename(src, dst)
    except OSError, err:
        # on windows, rename to existing file is not allowed, so we
        # must delete destination first. but if file is open, unlink
        # schedules it for delete but does not delete it. rename
        # happens immediately even for open files, so we create
        # temporary file, delete it, rename destination to that name,
        # then delete that. then rename is safe to do.
        fd, temp = tempfile.mkstemp(dir=os.path.dirname(dst) or '.')
        os.close(fd)
        os.unlink(temp)
        os.rename(dst, temp)
        os.unlink(temp)
        os.rename(src, dst)

def unlink(f):
    """unlink and remove the directory if it is empty"""
    os.unlink(f)
    # try removing directories that might now be empty
    try:
        os.removedirs(os.path.dirname(f))
    except OSError:
        pass

def copyfile(src, dest):
    "copy a file, preserving mode"
    try:
        shutil.copyfile(src, dest)
        shutil.copymode(src, dest)
    except shutil.Error, inst:
        # this is util.py itself, so raise Abort directly, not util.Abort
        raise Abort(str(inst))

def copyfiles(src, dst, hardlink=None):
    """Copy a directory tree using hardlinks if possible"""

    if hardlink is None:
        hardlink = (os.stat(src).st_dev ==
                    os.stat(os.path.dirname(dst)).st_dev)

    if os.path.isdir(src):
        os.mkdir(dst)
        for name in os.listdir(src):
            srcname = os.path.join(src, name)
            dstname = os.path.join(dst, name)
            copyfiles(srcname, dstname, hardlink)
    else:
        if hardlink:
            try:
                os_link(src, dst)
            except (IOError, OSError):
                hardlink = False
                shutil.copy(src, dst)
        else:
            shutil.copy(src, dst)

def audit_path(path):
    """Abort if path contains dangerous components"""
    parts = os.path.normcase(path).split(os.sep)
    if (os.path.splitdrive(path)[0] or parts[0] in ('.hg', '')
        or os.pardir in parts):
        raise Abort(_("path contains illegal component: %s\n") % path)

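# The check above rejects a path when it carries a drive prefix, begins with
# '.hg' or an empty component (i.e. is absolute), or contains a parent-dir
# component. A self-contained sketch of the same checks, using a literal '..'
# in place of os.pardir and a fixed '/' separator for illustration:

```python
import os.path

# Sketch of audit_path()'s checks (illustration, '/'-separated paths only).
def is_dangerous(path):
    parts = path.split('/')
    return (bool(os.path.splitdrive(path)[0])
            or parts[0] in ('.hg', '')
            or '..' in parts)

assert is_dangerous('../etc/passwd')   # escapes the repository root
assert is_dangerous('.hg/hgrc')        # touches repository metadata
assert is_dangerous('/absolute/path')  # absolute path (empty first part)
assert not is_dangerous('src/util.py')
```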
def _makelock_file(info, pathname):
    ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
    os.write(ld, info)
    os.close(ld)

def _readlock_file(pathname):
    return posixfile(pathname).read()

def nlinks(pathname):
    """Return number of hardlinks for the given file."""
    return os.lstat(pathname).st_nlink

if hasattr(os, 'link'):
    os_link = os.link
else:
    def os_link(src, dst):
        raise OSError(0, _("Hardlinks not supported"))

def fstat(fp):
    '''stat file object that may not have fileno method.'''
    try:
        return os.fstat(fp.fileno())
    except AttributeError:
        return os.stat(fp.name)

posixfile = file

def is_win_9x():
    '''return true if run on windows 95, 98 or me.'''
    try:
        return sys.getwindowsversion()[3] == 1
    except AttributeError:
        return os.name == 'nt' and 'command' in os.environ.get('comspec', '')

getuser_fallback = None

def getuser():
    '''return name of current user'''
    try:
        return getpass.getuser()
    except ImportError:
        # import of pwd will fail on windows - try fallback
        if getuser_fallback:
            return getuser_fallback()
        # raised if win32api not available
        raise Abort(_('user name not available - set USERNAME '
                      'environment variable'))

def username(uid=None):
    """Return the name of the user with the given uid.

    If uid is None, return the name of the current user."""
    try:
        import pwd
        if uid is None:
            uid = os.getuid()
        try:
            return pwd.getpwuid(uid)[0]
        except KeyError:
            return str(uid)
    except ImportError:
        return None

def groupname(gid=None):
    """Return the name of the group with the given gid.

    If gid is None, return the name of the current group."""
    try:
        import grp
        if gid is None:
            gid = os.getgid()
        try:
            return grp.getgrgid(gid)[0]
        except KeyError:
            return str(gid)
    except ImportError:
        return None

# Platform specific variants
if os.name == 'nt':
    demandload(globals(), "msvcrt")
    nulldev = 'NUL:'

    class winstdout:
        '''stdout on windows misbehaves if sent through a pipe'''

        def __init__(self, fp):
            self.fp = fp

        def __getattr__(self, key):
            return getattr(self.fp, key)

        def close(self):
            try:
                self.fp.close()
            except: pass

        def write(self, s):
            try:
                return self.fp.write(s)
            except IOError, inst:
                if inst.errno != 0: raise
                self.close()
                raise IOError(errno.EPIPE, 'Broken pipe')

    sys.stdout = winstdout(sys.stdout)

    def system_rcpath():
        try:
            return system_rcpath_win32()
        except:
            return [r'c:\mercurial\mercurial.ini']

    def os_rcpath():
        '''return default os-specific hgrc search path'''
        path = system_rcpath()
        path.append(user_rcpath())
        userprofile = os.environ.get('USERPROFILE')
        if userprofile:
            path.append(os.path.join(userprofile, 'mercurial.ini'))
        return path

    def user_rcpath():
        '''return os-specific hgrc search path to the user dir'''
        return os.path.join(os.path.expanduser('~'), 'mercurial.ini')

    def parse_patch_output(output_line):
        """parses the output produced by patch and returns the file name"""
        pf = output_line[14:]
        if pf[0] == '`':
            pf = pf[1:-1] # Remove the quotes
        return pf

    def testpid(pid):
        '''return False if pid dead, True if running or not known'''
        return True

    def is_exec(f, last):
        return last

    def set_exec(f, mode):
        pass

    def set_binary(fd):
        msvcrt.setmode(fd.fileno(), os.O_BINARY)

    def pconvert(path):
        return path.replace("\\", "/")

    def localpath(path):
        return path.replace('/', '\\')

    def normpath(path):
        return pconvert(os.path.normpath(path))

    makelock = _makelock_file
    readlock = _readlock_file

    def samestat(s1, s2):
        return False

    def shellquote(s):
        return '"%s"' % s.replace('"', '\\"')

    def explain_exit(code):
        return _("exited with status %d") % code, code

    # if you change this stub into a real check, please try to implement the
    # username and groupname functions above, too.
    def isowner(fp, st=None):
        return True

    try:
        # override functions with win32 versions if possible
        from util_win32 import *
        if not is_win_9x():
            posixfile = posixfile_nt
    except ImportError:
        pass

else:
    nulldev = '/dev/null'

    def rcfiles(path):
        rcs = [os.path.join(path, 'hgrc')]
        rcdir = os.path.join(path, 'hgrc.d')
        try:
            rcs.extend([os.path.join(rcdir, f) for f in os.listdir(rcdir)
                        if f.endswith(".rc")])
        except OSError:
            pass
        return rcs

    def os_rcpath():
        '''return default os-specific hgrc search path'''
        path = []
        # old mod_python does not set sys.argv
        if len(getattr(sys, 'argv', [])) > 0:
            path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
                                '/../etc/mercurial'))
        path.extend(rcfiles('/etc/mercurial'))
        path.append(os.path.expanduser('~/.hgrc'))
        path = [os.path.normpath(f) for f in path]
        return path

    def parse_patch_output(output_line):
        """parses the output produced by patch and returns the file name"""
        pf = output_line[14:]
        if pf.startswith("'") and pf.endswith("'") and " " in pf:
            pf = pf[1:-1] # Remove the quotes
        return pf

    def is_exec(f, last):
        """check whether a file is executable"""
        return (os.lstat(f).st_mode & 0100 != 0)

    def set_exec(f, mode):
        s = os.lstat(f).st_mode
        if (s & 0100 != 0) == mode:
            return
        if mode:
            # Turn on +x for every +r bit when making a file executable
            # and obey umask.
            umask = os.umask(0)
            os.umask(umask)
            os.chmod(f, s | (s & 0444) >> 2 & ~umask)
        else:
            os.chmod(f, s & 0666)

    def set_binary(fd):
        pass

    def pconvert(path):
        return path

    def localpath(path):
        return path

    normpath = os.path.normpath
    samestat = os.path.samestat

    def makelock(info, pathname):
        try:
            os.symlink(info, pathname)
        except OSError, why:
            if why.errno == errno.EEXIST:
                raise
            else:
                _makelock_file(info, pathname)

    def readlock(pathname):
        try:
            return os.readlink(pathname)
        except OSError, why:
            if why.errno == errno.EINVAL:
                return _readlock_file(pathname)
            else:
                raise

    def shellquote(s):
        return "'%s'" % s.replace("'", "'\\''")
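The POSIX `shellquote` above relies on a standard trick: a single-quoted shell string cannot contain a single quote, so each embedded quote is emitted as close-quote, backslash-escaped quote, reopen-quote. A minimal Python 3 restatement of the same one-liner:

```python
def shellquote(s):
    # Close the single-quoted string, emit an escaped literal quote,
    # then reopen the single-quoted string: ' -> '\''
    return "'%s'" % s.replace("'", "'\\''")

print(shellquote("it's here"))  # -> 'it'\''s here'
```

The whole argument stays inside single quotes except for the escaped quotes themselves, so no other shell metacharacter needs handling.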

    def testpid(pid):
        '''return False if pid dead, True if running or not sure'''
        try:
            os.kill(pid, 0)
            return True
        except OSError, inst:
            return inst.errno != errno.ESRCH

    def explain_exit(code):
        """return a 2-tuple (desc, code) describing a process's status"""
        if os.WIFEXITED(code):
            val = os.WEXITSTATUS(code)
            return _("exited with status %d") % val, val
        elif os.WIFSIGNALED(code):
            val = os.WTERMSIG(code)
            return _("killed by signal %d") % val, val
        elif os.WIFSTOPPED(code):
            val = os.WSTOPSIG(code)
            return _("stopped by signal %d") % val, val
        raise ValueError(_("invalid exit code"))

    def isowner(fp, st=None):
        """Return True if the file object fp belongs to the current user.

        The return value of a util.fstat(fp) may be passed as the st argument.
        """
        if st is None:
            st = fstat(fp)
        return st.st_uid == os.getuid()


def opener(base, audit=True):
    """
    return a function that opens files relative to base

    this function is used to hide the details of COW semantics and
    remote file access from higher level code.
    """
    p = base
    audit_p = audit

    def mktempcopy(name):
        d, fn = os.path.split(name)
        fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
        os.close(fd)
        ofp = posixfile(temp, "wb")
        try:
            try:
                ifp = posixfile(name, "rb")
            except IOError, inst:
                if not getattr(inst, 'filename', None):
                    inst.filename = name
                raise
            for chunk in filechunkiter(ifp):
                ofp.write(chunk)
            ifp.close()
            ofp.close()
        except:
            try: os.unlink(temp)
            except: pass
            raise
        st = os.lstat(name)
        os.chmod(temp, st.st_mode)
        return temp

    class atomictempfile(posixfile):
        """the file will only be copied when rename is called"""
        def __init__(self, name, mode):
            self.__name = name
            self.temp = mktempcopy(name)
            posixfile.__init__(self, self.temp, mode)
        def rename(self):
            if not self.closed:
                posixfile.close(self)
                rename(self.temp, localpath(self.__name))
        def __del__(self):
            if not self.closed:
                try:
                    os.unlink(self.temp)
                except: pass
                posixfile.close(self)

    class atomicfile(atomictempfile):
        """the file will only be copied on close"""
        def __init__(self, name, mode):
            atomictempfile.__init__(self, name, mode)
        def close(self):
            self.rename()
        def __del__(self):
            self.rename()

    def o(path, mode="r", text=False, atomic=False, atomictemp=False):
        if audit_p:
            audit_path(path)
        f = os.path.join(p, path)

        if not text:
            mode += "b" # for that other OS

        if mode[0] != "r":
            try:
                nlink = nlinks(f)
            except OSError:
                d = os.path.dirname(f)
                if not os.path.isdir(d):
                    os.makedirs(d)
            else:
                if atomic:
                    return atomicfile(f, mode)
                elif atomictemp:
                    return atomictempfile(f, mode)
                if nlink > 1:
                    rename(mktempcopy(f), f)
        return posixfile(f, mode)

    return o

class chunkbuffer(object):
    """Allow arbitrary sized chunks of data to be efficiently read from an
    iterator over chunks of arbitrary size."""

    def __init__(self, in_iter, targetsize = 2**16):
        """in_iter is the iterator that's iterating over the input chunks.
        targetsize is how big a buffer to try to maintain."""
        self.in_iter = iter(in_iter)
        self.buf = ''
        self.targetsize = int(targetsize)
        if self.targetsize <= 0:
            raise ValueError(_("targetsize must be greater than 0, was %d") %
                             targetsize)
        self.iterempty = False

    def fillbuf(self):
        """Ignore target size; read every chunk from iterator until empty."""
        if not self.iterempty:
            collector = cStringIO.StringIO()
            collector.write(self.buf)
            for ch in self.in_iter:
                collector.write(ch)
            self.buf = collector.getvalue()
            self.iterempty = True

    def read(self, l):
        """Read L bytes of data from the iterator of chunks of data.
        Returns less than L bytes if the iterator runs dry."""
        if l > len(self.buf) and not self.iterempty:
            # Clamp to a multiple of self.targetsize
            targetsize = self.targetsize * ((l // self.targetsize) + 1)
            collector = cStringIO.StringIO()
            collector.write(self.buf)
            collected = len(self.buf)
            for chunk in self.in_iter:
                collector.write(chunk)
                collected += len(chunk)
                if collected >= targetsize:
                    break
            if collected < targetsize:
                self.iterempty = True
            self.buf = collector.getvalue()
        s, self.buf = self.buf[:l], buffer(self.buf, l)
        return s
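The class above is Python 2 code (`cStringIO`, the builtin `buffer`). A minimal Python 3 sketch of the same idea, under the assumption that only the re-chunking behaviour matters (the hypothetical `ChunkBuffer` name and the simplified accumulate-then-slice loop are illustrative, not the original implementation):

```python
# Accumulate chunks from an iterator until at least l bytes are
# buffered, then hand back exactly the first l bytes.
class ChunkBuffer:
    def __init__(self, in_iter, targetsize=2**16):
        self.in_iter = iter(in_iter)
        self.buf = b''
        self.targetsize = targetsize

    def read(self, l):
        """Return up to l bytes; shorter only if the iterator runs dry."""
        while len(self.buf) < l:
            try:
                self.buf += next(self.in_iter)
            except StopIteration:
                break
        s, self.buf = self.buf[:l], self.buf[l:]
        return s

cb = ChunkBuffer([b'ab', b'cde', b'f'])
print(cb.read(4))  # b'abcd'
print(cb.read(4))  # b'ef'
```

The original additionally rounds its read-ahead up to a multiple of `targetsize` so that many small reads still pull data from the iterator in large batches.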

def filechunkiter(f, size=65536, limit=None):
    """Create a generator that produces the data in the file size
    (default 65536) bytes at a time, up to optional limit (default is
    to read all data).  Chunks may be less than size bytes if the
    chunk is the last chunk in the file, or the file is a socket or
    some other type of file that sometimes reads less data than is
    requested."""
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        if limit is None: nbytes = size
        else: nbytes = min(limit, size)
        s = nbytes and f.read(nbytes)
        if not s: break
        if limit: limit -= len(s)
        yield s
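`filechunkiter` happens to be valid Python 3 as-is, so it can be exercised directly against an in-memory file. A small usage sketch (the `io.BytesIO` stand-in for a real file is the only assumption):

```python
import io

def filechunkiter(f, size=65536, limit=None):
    """Yield the data in f in chunks of up to `size` bytes,
    stopping after `limit` bytes if a limit is given."""
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        if limit is None: nbytes = size
        else: nbytes = min(limit, size)
        s = nbytes and f.read(nbytes)   # nbytes == 0 short-circuits to 0
        if not s: break
        if limit: limit -= len(s)
        yield s

print(list(filechunkiter(io.BytesIO(b'abcdefgh'), size=3)))
# [b'abc', b'def', b'gh']
```

Note the `s = nbytes and f.read(nbytes)` idiom: once `limit` reaches zero, `nbytes` is 0, `s` is falsy, and the loop ends without another read.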

def makedate():
    lt = time.localtime()
    if lt[8] == 1 and time.daylight:
        tz = time.altzone
    else:
        tz = time.timezone
    return time.mktime(lt), tz

def datestr(date=None, format='%a %b %d %H:%M:%S %Y', timezone=True):
    """represent a (unixtime, offset) tuple as a localized time.
    unixtime is seconds since the epoch, and offset is the time zone's
    number of seconds away from UTC. if timezone is false, do not
    append time zone to string."""
    t, tz = date or makedate()
    s = time.strftime(format, time.gmtime(float(t) - tz))
    if timezone:
        s += " %+03d%02d" % (-tz / 3600, ((-tz % 3600) / 60))
    return s

def strdate(string, format='%a %b %d %H:%M:%S %Y'):
    """parse a localized time string and return a (unixtime, offset) tuple.
    if the string cannot be parsed, ValueError is raised."""
    def hastimezone(string):
        return (string[-4:].isdigit() and
                (string[-5] == '+' or string[-5] == '-') and
                string[-6].isspace())

    # NOTE: unixtime = localunixtime + offset
    if hastimezone(string):
        date, tz = string[:-6], string[-5:]
        tz = int(tz)
        offset = - 3600 * (tz / 100) - 60 * (tz % 100)
    else:
        date, offset = string, None
    timetuple = time.strptime(date, format)
    localunixtime = int(calendar.timegm(timetuple))
    if offset is None:
        # local timezone
        unixtime = int(time.mktime(timetuple))
        offset = unixtime - localunixtime
    else:
        unixtime = localunixtime + offset
    return unixtime, offset

def parsedate(string, formats=None):
    """parse a localized time string and return a (unixtime, offset) tuple.
    The date may be a "unixtime offset" string or in one of the specified
    formats."""
    if not formats:
        formats = defaultdateformats
    try:
        when, offset = map(int, string.split(' '))
    except ValueError:
        for format in formats:
            try:
                when, offset = strdate(string, format)
            except ValueError:
                pass
            else:
                break
        else:
            raise ValueError(_('invalid date: %r '
                               'see hg(1) manual page for details')
                             % string)
    # validate explicit (probably user-specified) date and
    # time zone offset. values must fit in signed 32 bits for
    # current 32-bit linux runtimes. timezones go from UTC-12
    # to UTC+14
    if abs(when) > 0x7fffffff:
        raise ValueError(_('date exceeds 32 bits: %d') % when)
    if offset < -50400 or offset > 43200:
        raise ValueError(_('impossible time zone offset: %d') % offset)
    return when, offset
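`parsedate` first tries the internal "unixtime offset" representation before falling back to the locale formats. A Python 3 sketch of just that fast path with the same range checks (the `parsedate_fastpath` name is illustrative; the format fallback and gettext wrapping are omitted):

```python
def parsedate_fastpath(string):
    # "unixtime offset": two space-separated integers.
    when, offset = map(int, string.split(' '))
    # Timestamps must fit in signed 32 bits; offsets span UTC+14
    # (-50400 seconds) to UTC-12 (+43200 seconds).
    if abs(when) > 0x7fffffff:
        raise ValueError('date exceeds 32 bits: %d' % when)
    if offset < -50400 or offset > 43200:
        raise ValueError('impossible time zone offset: %d' % offset)
    return when, offset

print(parsedate_fastpath('1000000000 -7200'))  # (1000000000, -7200)
```

Rejecting out-of-range values here catches corrupted or hand-edited changeset dates early rather than at display time.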

def shortuser(user):
    """Return a short representation of a user name or email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]
    f = user.find('<')
    if f >= 0:
        user = user[f+1:]
    f = user.find(' ')
    if f >= 0:
        user = user[:f]
    f = user.find('.')
    if f >= 0:
        user = user[:f]
    return user
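`shortuser` progressively strips an "Full Name <addr>" string down to a login-like token: drop the domain, drop everything before `<`, then truncate at the first space or dot. The function is already valid Python 3, so it can be demonstrated directly:

```python
def shortuser(user):
    """Return a short representation of a user name or email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]      # drop the @domain part
    f = user.find('<')
    if f >= 0:
        user = user[f+1:]    # drop "Full Name <" prefix
    f = user.find(' ')
    if f >= 0:
        user = user[:f]      # keep only the first word
    f = user.find('.')
    if f >= 0:
        user = user[:f]      # truncate at the first dot
    return user

print(shortuser("Bryan O'Sullivan <bos@serpentine.com>"))  # bos
```

Note the order matters: the `@` is cut before the `<`, so the closing `>` of the address never survives to this point.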

def walkrepos(path):
    '''yield every hg repository under path, recursively.'''
    def errhandler(err):
        if err.filename == path:
            raise err

    for root, dirs, files in os.walk(path, onerror=errhandler):
        for d in dirs:
            if d == '.hg':
                yield root
                dirs[:] = []
                break
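`walkrepos` uses the `os.walk` pruning idiom: assigning `dirs[:] = []` in place stops the walk from descending into a repository once its `.hg` directory is found. The function runs unchanged on Python 3; a sketch exercising it against a throwaway tree (the directory names `a` and `b/c` are arbitrary):

```python
import os
import shutil
import tempfile

def walkrepos(path):
    '''yield every hg repository under path, recursively.'''
    def errhandler(err):
        if err.filename == path:
            raise err

    for root, dirs, files in os.walk(path, onerror=errhandler):
        for d in dirs:
            if d == '.hg':
                yield root
                dirs[:] = []   # prune: don't descend into a found repo
                break

tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, 'a', '.hg'))
os.makedirs(os.path.join(tmp, 'b', 'c', '.hg'))
repos = sorted(walkrepos(tmp))
shutil.rmtree(tmp)
```

Only top-of-tree errors are fatal: `errhandler` re-raises solely when the failure is on `path` itself, so unreadable subdirectories are silently skipped.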

_rcpath = None

def rcpath():
    '''return hgrc search path. if env var HGRCPATH is set, use it.
    for each item in path, if directory, use files ending in .rc,
    else use item.
    make HGRCPATH empty to only look in .hg/hgrc of current repo.
    if no HGRCPATH, use default os-specific path.'''
    global _rcpath
    if _rcpath is None:
        if 'HGRCPATH' in os.environ:
            _rcpath = []
            for p in os.environ['HGRCPATH'].split(os.pathsep):
                if not p: continue
                if os.path.isdir(p):
                    for f in os.listdir(p):
                        if f.endswith('.rc'):
                            _rcpath.append(os.path.join(p, f))
                else:
                    _rcpath.append(p)
        else:
            _rcpath = os_rcpath()
    return _rcpath

def bytecount(nbytes):
    '''return byte count formatted as readable string, with units'''

    units = (
        (100, 1<<30, _('%.0f GB')),
        (10, 1<<30, _('%.1f GB')),
        (1, 1<<30, _('%.2f GB')),
        (100, 1<<20, _('%.0f MB')),
        (10, 1<<20, _('%.1f MB')),
        (1, 1<<20, _('%.2f MB')),
        (100, 1<<10, _('%.0f KB')),
        (10, 1<<10, _('%.1f KB')),
        (1, 1<<10, _('%.2f KB')),
        (1, 1, _('%.0f bytes')),
        )

    for multiplier, divisor, format in units:
        if nbytes >= divisor * multiplier:
            return format % (nbytes / float(divisor))
    return units[-1][2] % nbytes
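The unit table encodes a sliding-precision rule: the larger the count relative to its unit, the fewer decimals are shown (100+ units get `%.0f`, 10+ get `%.1f`, otherwise `%.2f`). A Python 3 sketch of the same table, with the gettext `_()` wrapping dropped for self-containment:

```python
def bytecount(nbytes):
    '''return byte count formatted as readable string, with units'''
    units = (
        (100, 1 << 30, '%.0f GB'),
        (10,  1 << 30, '%.1f GB'),
        (1,   1 << 30, '%.2f GB'),
        (100, 1 << 20, '%.0f MB'),
        (10,  1 << 20, '%.1f MB'),
        (1,   1 << 20, '%.2f MB'),
        (100, 1 << 10, '%.0f KB'),
        (10,  1 << 10, '%.1f KB'),
        (1,   1 << 10, '%.2f KB'),
        (1,   1,       '%.0f bytes'),
    )
    # First row whose threshold (divisor * multiplier) fits wins.
    for multiplier, divisor, fmt in units:
        if nbytes >= divisor * multiplier:
            return fmt % (nbytes / float(divisor))
    return units[-1][2] % nbytes

print(bytecount(100000))   # 97.7 KB
print(bytecount(1 << 30))  # 1.00 GB
```

The final `return` only fires for `nbytes == 0` (or negative input), since the `(1, 1, ...)` row matches every positive count.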

def drop_scheme(scheme, path):
    sc = scheme + ':'
    if path.startswith(sc):
        path = path[len(sc):]
        if path.startswith('//'):
            path = path[2:]
    return path
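`drop_scheme` strips a URL scheme prefix and, only when the scheme matched, an immediately following `//` authority marker; an unprefixed path passes through untouched. The function is Python 3-compatible as-is:

```python
def drop_scheme(scheme, path):
    sc = scheme + ':'
    if path.startswith(sc):
        path = path[len(sc):]
        # only strip '//' when it followed the matched scheme
        if path.startswith('//'):
            path = path[2:]
    return path

print(drop_scheme('file', 'file:///repo'))  # /repo
print(drop_scheme('file', '/plain/path'))   # /plain/path
```

Nesting the `//` check inside the scheme match keeps paths that merely begin with `//` (but carry no scheme) intact.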
# util_win32.py - utility functions that use win32 API
#
# Copyright 2005 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of
# the GNU General Public License, incorporated herein by reference.

# Mark Hammond's win32all package allows better functionality on
# Windows. this module overrides definitions in util.py. if not
# available, import of this module will fail, and generic code will be
# used.

import win32api

from demandload import *
from i18n import gettext as _
demandload(globals(), 'errno os pywintypes win32con win32file win32process')
demandload(globals(), 'cStringIO win32com.shell:shell,shellcon winerror')

class WinError:
    winerror_map = {
        winerror.ERROR_ACCESS_DENIED: errno.EACCES,
        winerror.ERROR_ACCOUNT_DISABLED: errno.EACCES,
        winerror.ERROR_ACCOUNT_RESTRICTION: errno.EACCES,
        winerror.ERROR_ALREADY_ASSIGNED: errno.EBUSY,
        winerror.ERROR_ALREADY_EXISTS: errno.EEXIST,
        winerror.ERROR_ARITHMETIC_OVERFLOW: errno.ERANGE,
        winerror.ERROR_BAD_COMMAND: errno.EIO,
        winerror.ERROR_BAD_DEVICE: errno.ENODEV,
        winerror.ERROR_BAD_DRIVER_LEVEL: errno.ENXIO,
        winerror.ERROR_BAD_EXE_FORMAT: errno.ENOEXEC,
        winerror.ERROR_BAD_FORMAT: errno.ENOEXEC,
        winerror.ERROR_BAD_LENGTH: errno.EINVAL,
        winerror.ERROR_BAD_PATHNAME: errno.ENOENT,
        winerror.ERROR_BAD_PIPE: errno.EPIPE,
        winerror.ERROR_BAD_UNIT: errno.ENODEV,
        winerror.ERROR_BAD_USERNAME: errno.EINVAL,
        winerror.ERROR_BROKEN_PIPE: errno.EPIPE,
        winerror.ERROR_BUFFER_OVERFLOW: errno.ENAMETOOLONG,
        winerror.ERROR_BUSY: errno.EBUSY,
        winerror.ERROR_BUSY_DRIVE: errno.EBUSY,
        winerror.ERROR_CALL_NOT_IMPLEMENTED: errno.ENOSYS,
        winerror.ERROR_CANNOT_MAKE: errno.EACCES,
        winerror.ERROR_CANTOPEN: errno.EIO,
        winerror.ERROR_CANTREAD: errno.EIO,
        winerror.ERROR_CANTWRITE: errno.EIO,
        winerror.ERROR_CRC: errno.EIO,
        winerror.ERROR_CURRENT_DIRECTORY: errno.EACCES,
        winerror.ERROR_DEVICE_IN_USE: errno.EBUSY,
        winerror.ERROR_DEV_NOT_EXIST: errno.ENODEV,
        winerror.ERROR_DIRECTORY: errno.EINVAL,
        winerror.ERROR_DIR_NOT_EMPTY: errno.ENOTEMPTY,
        winerror.ERROR_DISK_CHANGE: errno.EIO,
        winerror.ERROR_DISK_FULL: errno.ENOSPC,
        winerror.ERROR_DRIVE_LOCKED: errno.EBUSY,
        winerror.ERROR_ENVVAR_NOT_FOUND: errno.EINVAL,
        winerror.ERROR_EXE_MARKED_INVALID: errno.ENOEXEC,
        winerror.ERROR_FILENAME_EXCED_RANGE: errno.ENAMETOOLONG,
        winerror.ERROR_FILE_EXISTS: errno.EEXIST,
        winerror.ERROR_FILE_INVALID: errno.ENODEV,
        winerror.ERROR_FILE_NOT_FOUND: errno.ENOENT,
        winerror.ERROR_GEN_FAILURE: errno.EIO,
        winerror.ERROR_HANDLE_DISK_FULL: errno.ENOSPC,
        winerror.ERROR_INSUFFICIENT_BUFFER: errno.ENOMEM,
        winerror.ERROR_INVALID_ACCESS: errno.EACCES,
        winerror.ERROR_INVALID_ADDRESS: errno.EFAULT,
        winerror.ERROR_INVALID_BLOCK: errno.EFAULT,
        winerror.ERROR_INVALID_DATA: errno.EINVAL,
        winerror.ERROR_INVALID_DRIVE: errno.ENODEV,
        winerror.ERROR_INVALID_EXE_SIGNATURE: errno.ENOEXEC,
        winerror.ERROR_INVALID_FLAGS: errno.EINVAL,
        winerror.ERROR_INVALID_FUNCTION: errno.ENOSYS,
        winerror.ERROR_INVALID_HANDLE: errno.EBADF,
        winerror.ERROR_INVALID_LOGON_HOURS: errno.EACCES,
        winerror.ERROR_INVALID_NAME: errno.EINVAL,
        winerror.ERROR_INVALID_OWNER: errno.EINVAL,
        winerror.ERROR_INVALID_PARAMETER: errno.EINVAL,
        winerror.ERROR_INVALID_PASSWORD: errno.EPERM,
        winerror.ERROR_INVALID_PRIMARY_GROUP: errno.EINVAL,
        winerror.ERROR_INVALID_SIGNAL_NUMBER: errno.EINVAL,
        winerror.ERROR_INVALID_TARGET_HANDLE: errno.EIO,
        winerror.ERROR_INVALID_WORKSTATION: errno.EACCES,
        winerror.ERROR_IO_DEVICE: errno.EIO,
        winerror.ERROR_IO_INCOMPLETE: errno.EINTR,
        winerror.ERROR_LOCKED: errno.EBUSY,
        winerror.ERROR_LOCK_VIOLATION: errno.EACCES,
        winerror.ERROR_LOGON_FAILURE: errno.EACCES,
        winerror.ERROR_MAPPED_ALIGNMENT: errno.EINVAL,
        winerror.ERROR_META_EXPANSION_TOO_LONG: errno.E2BIG,
        winerror.ERROR_MORE_DATA: errno.EPIPE,
        winerror.ERROR_NEGATIVE_SEEK: errno.ESPIPE,
        winerror.ERROR_NOACCESS: errno.EFAULT,
        winerror.ERROR_NONE_MAPPED: errno.EINVAL,
        winerror.ERROR_NOT_ENOUGH_MEMORY: errno.ENOMEM,
        winerror.ERROR_NOT_READY: errno.EAGAIN,
        winerror.ERROR_NOT_SAME_DEVICE: errno.EXDEV,
        winerror.ERROR_NO_DATA: errno.EPIPE,
        winerror.ERROR_NO_MORE_SEARCH_HANDLES: errno.EIO,
99 winerror.ERROR_NO_MORE_SEARCH_HANDLES: errno.EIO,
100 winerror.ERROR_NO_PROC_SLOTS: errno.EAGAIN,
100 winerror.ERROR_NO_PROC_SLOTS: errno.EAGAIN,
101 winerror.ERROR_NO_SUCH_PRIVILEGE: errno.EACCES,
101 winerror.ERROR_NO_SUCH_PRIVILEGE: errno.EACCES,
102 winerror.ERROR_OPEN_FAILED: errno.EIO,
102 winerror.ERROR_OPEN_FAILED: errno.EIO,
103 winerror.ERROR_OPEN_FILES: errno.EBUSY,
103 winerror.ERROR_OPEN_FILES: errno.EBUSY,
104 winerror.ERROR_OPERATION_ABORTED: errno.EINTR,
104 winerror.ERROR_OPERATION_ABORTED: errno.EINTR,
105 winerror.ERROR_OUTOFMEMORY: errno.ENOMEM,
105 winerror.ERROR_OUTOFMEMORY: errno.ENOMEM,
106 winerror.ERROR_PASSWORD_EXPIRED: errno.EACCES,
106 winerror.ERROR_PASSWORD_EXPIRED: errno.EACCES,
107 winerror.ERROR_PATH_BUSY: errno.EBUSY,
107 winerror.ERROR_PATH_BUSY: errno.EBUSY,
108 winerror.ERROR_PATH_NOT_FOUND: errno.ENOENT,
108 winerror.ERROR_PATH_NOT_FOUND: errno.ENOENT,
109 winerror.ERROR_PIPE_BUSY: errno.EBUSY,
109 winerror.ERROR_PIPE_BUSY: errno.EBUSY,
110 winerror.ERROR_PIPE_CONNECTED: errno.EPIPE,
110 winerror.ERROR_PIPE_CONNECTED: errno.EPIPE,
111 winerror.ERROR_PIPE_LISTENING: errno.EPIPE,
111 winerror.ERROR_PIPE_LISTENING: errno.EPIPE,
112 winerror.ERROR_PIPE_NOT_CONNECTED: errno.EPIPE,
112 winerror.ERROR_PIPE_NOT_CONNECTED: errno.EPIPE,
113 winerror.ERROR_PRIVILEGE_NOT_HELD: errno.EACCES,
113 winerror.ERROR_PRIVILEGE_NOT_HELD: errno.EACCES,
114 winerror.ERROR_READ_FAULT: errno.EIO,
114 winerror.ERROR_READ_FAULT: errno.EIO,
115 winerror.ERROR_SEEK: errno.EIO,
115 winerror.ERROR_SEEK: errno.EIO,
116 winerror.ERROR_SEEK_ON_DEVICE: errno.ESPIPE,
116 winerror.ERROR_SEEK_ON_DEVICE: errno.ESPIPE,
117 winerror.ERROR_SHARING_BUFFER_EXCEEDED: errno.ENFILE,
117 winerror.ERROR_SHARING_BUFFER_EXCEEDED: errno.ENFILE,
118 winerror.ERROR_SHARING_VIOLATION: errno.EACCES,
118 winerror.ERROR_SHARING_VIOLATION: errno.EACCES,
119 winerror.ERROR_STACK_OVERFLOW: errno.ENOMEM,
119 winerror.ERROR_STACK_OVERFLOW: errno.ENOMEM,
120 winerror.ERROR_SWAPERROR: errno.ENOENT,
120 winerror.ERROR_SWAPERROR: errno.ENOENT,
121 winerror.ERROR_TOO_MANY_MODULES: errno.EMFILE,
121 winerror.ERROR_TOO_MANY_MODULES: errno.EMFILE,
122 winerror.ERROR_TOO_MANY_OPEN_FILES: errno.EMFILE,
122 winerror.ERROR_TOO_MANY_OPEN_FILES: errno.EMFILE,
123 winerror.ERROR_UNRECOGNIZED_MEDIA: errno.ENXIO,
123 winerror.ERROR_UNRECOGNIZED_MEDIA: errno.ENXIO,
124 winerror.ERROR_UNRECOGNIZED_VOLUME: errno.ENODEV,
124 winerror.ERROR_UNRECOGNIZED_VOLUME: errno.ENODEV,
125 winerror.ERROR_WAIT_NO_CHILDREN: errno.ECHILD,
125 winerror.ERROR_WAIT_NO_CHILDREN: errno.ECHILD,
126 winerror.ERROR_WRITE_FAULT: errno.EIO,
126 winerror.ERROR_WRITE_FAULT: errno.EIO,
127 winerror.ERROR_WRITE_PROTECT: errno.EROFS,
127 winerror.ERROR_WRITE_PROTECT: errno.EROFS,
128 }
128 }
129
129
130 def __init__(self, err):
130 def __init__(self, err):
131 self.win_errno, self.win_function, self.win_strerror = err
131 self.win_errno, self.win_function, self.win_strerror = err
132 if self.win_strerror.endswith('.'):
132 if self.win_strerror.endswith('.'):
133 self.win_strerror = self.win_strerror[:-1]
133 self.win_strerror = self.win_strerror[:-1]
134
134
135 class WinIOError(WinError, IOError):
135 class WinIOError(WinError, IOError):
136 def __init__(self, err, filename=None):
136 def __init__(self, err, filename=None):
137 WinError.__init__(self, err)
137 WinError.__init__(self, err)
138 IOError.__init__(self, self.winerror_map.get(self.win_errno, 0),
138 IOError.__init__(self, self.winerror_map.get(self.win_errno, 0),
139 self.win_strerror)
139 self.win_strerror)
140 self.filename = filename
140 self.filename = filename
141
141
142 class WinOSError(WinError, OSError):
142 class WinOSError(WinError, OSError):
143 def __init__(self, err):
143 def __init__(self, err):
144 WinError.__init__(self, err)
144 WinError.__init__(self, err)
145 OSError.__init__(self, self.winerror_map.get(self.win_errno, 0),
145 OSError.__init__(self, self.winerror_map.get(self.win_errno, 0),
146 self.win_strerror)
146 self.win_strerror)
147
147
148 def os_link(src, dst):
148 def os_link(src, dst):
149 # NB will only succeed on NTFS
149 # NB will only succeed on NTFS
150 try:
150 try:
151 win32file.CreateHardLink(dst, src)
151 win32file.CreateHardLink(dst, src)
152 except pywintypes.error, details:
152 except pywintypes.error, details:
153 raise WinOSError(details)
153 raise WinOSError(details)
154
154
155 def nlinks(pathname):
155 def nlinks(pathname):
156 """Return number of hardlinks for the given file."""
156 """Return number of hardlinks for the given file."""
157 try:
157 try:
158 fh = win32file.CreateFile(pathname,
158 fh = win32file.CreateFile(pathname,
159 win32file.GENERIC_READ, win32file.FILE_SHARE_READ,
159 win32file.GENERIC_READ, win32file.FILE_SHARE_READ,
160 None, win32file.OPEN_EXISTING, 0, None)
160 None, win32file.OPEN_EXISTING, 0, None)
161 res = win32file.GetFileInformationByHandle(fh)
161 res = win32file.GetFileInformationByHandle(fh)
162 fh.Close()
162 fh.Close()
163 return res[7]
163 return res[7]
164 except pywintypes.error:
164 except pywintypes.error:
165 return os.lstat(pathname).st_nlink
165 return os.lstat(pathname).st_nlink
166
166
167 def testpid(pid):
167 def testpid(pid):
168 '''return True if pid is still running or unable to
168 '''return True if pid is still running or unable to
169 determine, False otherwise'''
169 determine, False otherwise'''
170 try:
170 try:
171 handle = win32api.OpenProcess(
171 handle = win32api.OpenProcess(
172 win32con.PROCESS_QUERY_INFORMATION, False, pid)
172 win32con.PROCESS_QUERY_INFORMATION, False, pid)
173 if handle:
173 if handle:
174 status = win32process.GetExitCodeProcess(handle)
174 status = win32process.GetExitCodeProcess(handle)
175 return status == win32con.STILL_ACTIVE
175 return status == win32con.STILL_ACTIVE
176 except pywintypes.error, details:
176 except pywintypes.error, details:
177 return details[0] != winerror.ERROR_INVALID_PARAMETER
177 return details[0] != winerror.ERROR_INVALID_PARAMETER
178 return True
178 return True
179
179
180 def system_rcpath_win32():
180 def system_rcpath_win32():
181 '''return default os-specific hgrc search path'''
181 '''return default os-specific hgrc search path'''
182 proc = win32api.GetCurrentProcess()
182 proc = win32api.GetCurrentProcess()
183 try:
183 try:
184 # This will fail on windows < NT
184 # This will fail on windows < NT
185 filename = win32process.GetModuleFileNameEx(proc, 0)
185 filename = win32process.GetModuleFileNameEx(proc, 0)
186 except:
186 except:
187 filename = win32api.GetModuleFileName(0)
187 filename = win32api.GetModuleFileName(0)
188 return [os.path.join(os.path.dirname(filename), 'mercurial.ini')]
188 return [os.path.join(os.path.dirname(filename), 'mercurial.ini')]
189
189
190 def user_rcpath():
190 def user_rcpath():
191 '''return os-specific hgrc search path to the user dir'''
191 '''return os-specific hgrc search path to the user dir'''
192 userdir = os.path.expanduser('~')
192 userdir = os.path.expanduser('~')
193 if userdir == '~':
193 if userdir == '~':
194 # We are on win < nt: fetch the APPDATA directory location and use
194 # We are on win < nt: fetch the APPDATA directory location and use
195 # the parent directory as the user home dir.
195 # the parent directory as the user home dir.
196 appdir = shell.SHGetPathFromIDList(
196 appdir = shell.SHGetPathFromIDList(
197 shell.SHGetSpecialFolderLocation(0, shellcon.CSIDL_APPDATA))
197 shell.SHGetSpecialFolderLocation(0, shellcon.CSIDL_APPDATA))
198 userdir = os.path.dirname(appdir)
198 userdir = os.path.dirname(appdir)
199 return os.path.join(userdir, 'mercurial.ini')
199 return os.path.join(userdir, 'mercurial.ini')
200
200
201 class posixfile_nt(object):
201 class posixfile_nt(object):
202 '''file object with posix-like semantics. on windows, normal
202 '''file object with posix-like semantics. on windows, normal
203 files can not be deleted or renamed if they are open. must open
203 files can not be deleted or renamed if they are open. must open
204 with win32file.FILE_SHARE_DELETE. this flag does not exist on
204 with win32file.FILE_SHARE_DELETE. this flag does not exist on
205 windows < nt, so do not use this class there.'''
205 windows < nt, so do not use this class there.'''
206
206
207 # tried to use win32file._open_osfhandle to pass fd to os.fdopen,
207 # tried to use win32file._open_osfhandle to pass fd to os.fdopen,
208 # but does not work at all. wrap win32 file api instead.
208 # but does not work at all. wrap win32 file api instead.
209
209
210 def __init__(self, name, mode='rb'):
210 def __init__(self, name, mode='rb'):
211 access = 0
211 access = 0
212 if 'r' in mode or '+' in mode:
212 if 'r' in mode or '+' in mode:
213 access |= win32file.GENERIC_READ
213 access |= win32file.GENERIC_READ
214 if 'w' in mode or 'a' in mode:
214 if 'w' in mode or 'a' in mode:
215 access |= win32file.GENERIC_WRITE
215 access |= win32file.GENERIC_WRITE
216 if 'r' in mode:
216 if 'r' in mode:
217 creation = win32file.OPEN_EXISTING
217 creation = win32file.OPEN_EXISTING
218 elif 'a' in mode:
218 elif 'a' in mode:
219 creation = win32file.OPEN_ALWAYS
219 creation = win32file.OPEN_ALWAYS
220 else:
220 else:
221 creation = win32file.CREATE_ALWAYS
221 creation = win32file.CREATE_ALWAYS
222 try:
222 try:
223 self.handle = win32file.CreateFile(name,
223 self.handle = win32file.CreateFile(name,
224 access,
224 access,
225 win32file.FILE_SHARE_READ |
225 win32file.FILE_SHARE_READ |
226 win32file.FILE_SHARE_WRITE |
226 win32file.FILE_SHARE_WRITE |
227 win32file.FILE_SHARE_DELETE,
227 win32file.FILE_SHARE_DELETE,
228 None,
228 None,
229 creation,
229 creation,
230 win32file.FILE_ATTRIBUTE_NORMAL,
230 win32file.FILE_ATTRIBUTE_NORMAL,
231 0)
231 0)
232 except pywintypes.error, err:
232 except pywintypes.error, err:
233 raise WinIOError(err, name)
233 raise WinIOError(err, name)
234 self.closed = False
234 self.closed = False
235 self.name = name
235 self.name = name
236 self.mode = mode
236 self.mode = mode
237
237
238 def __iter__(self):
238 def __iter__(self):
239 for line in self.read().splitlines(True):
239 for line in self.read().splitlines(True):
240 yield line
240 yield line
241
241
242 def read(self, count=-1):
242 def read(self, count=-1):
243 try:
243 try:
244 cs = cStringIO.StringIO()
244 cs = cStringIO.StringIO()
245 while count:
245 while count:
246 wincount = int(count)
246 wincount = int(count)
247 if wincount == -1:
247 if wincount == -1:
248 wincount = 1048576
248 wincount = 1048576
249 val, data = win32file.ReadFile(self.handle, wincount)
249 val, data = win32file.ReadFile(self.handle, wincount)
250 if not data: break
250 if not data: break
251 cs.write(data)
251 cs.write(data)
252 if count != -1:
252 if count != -1:
253 count -= len(data)
253 count -= len(data)
254 return cs.getvalue()
254 return cs.getvalue()
255 except pywintypes.error, err:
255 except pywintypes.error, err:
256 raise WinIOError(err)
256 raise WinIOError(err)
257
257
258 def write(self, data):
258 def write(self, data):
259 try:
259 try:
260 if 'a' in self.mode:
260 if 'a' in self.mode:
261 win32file.SetFilePointer(self.handle, 0, win32file.FILE_END)
261 win32file.SetFilePointer(self.handle, 0, win32file.FILE_END)
262 nwrit = 0
262 nwrit = 0
263 while nwrit < len(data):
263 while nwrit < len(data):
264 val, nwrit = win32file.WriteFile(self.handle, data)
264 val, nwrit = win32file.WriteFile(self.handle, data)
265 data = data[nwrit:]
265 data = data[nwrit:]
266 except pywintypes.error, err:
266 except pywintypes.error, err:
267 raise WinIOError(err)
267 raise WinIOError(err)
268
268
269 def seek(self, pos, whence=0):
269 def seek(self, pos, whence=0):
270 try:
270 try:
271 win32file.SetFilePointer(self.handle, int(pos), whence)
271 win32file.SetFilePointer(self.handle, int(pos), whence)
272 except pywintypes.error, err:
272 except pywintypes.error, err:
273 raise WinIOError(err)
273 raise WinIOError(err)
274
274
275 def tell(self):
275 def tell(self):
276 try:
276 try:
277 return win32file.SetFilePointer(self.handle, 0,
277 return win32file.SetFilePointer(self.handle, 0,
278 win32file.FILE_CURRENT)
278 win32file.FILE_CURRENT)
279 except pywintypes.error, err:
279 except pywintypes.error, err:
280 raise WinIOError(err)
280 raise WinIOError(err)
281
281
282 def close(self):
282 def close(self):
283 if not self.closed:
283 if not self.closed:
284 self.handle = None
284 self.handle = None
285 self.closed = True
285 self.closed = True
286
286
287 def flush(self):
287 def flush(self):
288 try:
288 try:
289 win32file.FlushFileBuffers(self.handle)
289 win32file.FlushFileBuffers(self.handle)
290 except pywintypes.error, err:
290 except pywintypes.error, err:
291 raise WinIOError(err)
291 raise WinIOError(err)
292
292
293 def truncate(self, pos=0):
293 def truncate(self, pos=0):
294 try:
294 try:
295 win32file.SetFilePointer(self.handle, int(pos),
295 win32file.SetFilePointer(self.handle, int(pos),
296 win32file.FILE_BEGIN)
296 win32file.FILE_BEGIN)
297 win32file.SetEndOfFile(self.handle)
297 win32file.SetEndOfFile(self.handle)
298 except pywintypes.error, err:
298 except pywintypes.error, err:
299 raise WinIOError(err)
299 raise WinIOError(err)
300
301 getuser_fallback = win32api.GetUserName
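The WinIOError class above translates a Windows error triple into a POSIX-style IOError, defaulting the errno to 0 when the code has no table entry and stripping a trailing period from the message. A minimal, platform-independent sketch of that pattern (the numeric codes 2, 5 and 32 are the documented values of ERROR_FILE_NOT_FOUND, ERROR_ACCESS_DENIED and ERROR_SHARING_VIOLATION; the helper name is illustrative, not part of the code above):

```python
import errno

# Hypothetical subset of the Windows-error -> POSIX-errno table above.
WINERROR_TO_ERRNO = {
    2: errno.ENOENT,   # ERROR_FILE_NOT_FOUND
    5: errno.EACCES,   # ERROR_ACCESS_DENIED
    32: errno.EACCES,  # ERROR_SHARING_VIOLATION
}

def make_ioerror(win_errno, strerror, filename=None):
    """Build an IOError from a Windows error code, mirroring WinIOError:
    unknown codes map to errno 0, and a trailing '.' is dropped."""
    if strerror.endswith('.'):
        strerror = strerror[:-1]
    err = IOError(WINERROR_TO_ERRNO.get(win_errno, 0), strerror)
    err.filename = filename
    return err

e = make_ioerror(2, 'The system cannot find the file specified.', 'foo.txt')
print(e.errno, e.strerror, e.filename)
```

This keeps callers portable: code that catches IOError and inspects `errno` works the same whether the error originated from the C runtime or from the win32 API.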
@@ -1,26 +1,28 @@
 #!/bin/sh
 
 unset HGUSER
 EMAIL="My Name <myname@example.com>"
 export EMAIL
 
 hg init test
 cd test
 touch asdf
 hg add asdf
 hg commit -d '1000000 0' -m commit-1
 hg tip
 
 unset EMAIL
-echo 1 > asdf
-hg commit -d '1000000 0' -m commit-1
+echo 1234 > asdf
 hg commit -d '1000000 0' -u "foo@bar.com" -m commit-1
 hg tip
 echo "[ui]" >> .hg/hgrc
 echo "username = foobar <foo@bar.com>" >> .hg/hgrc
 echo 12 > asdf
 hg commit -d '1000000 0' -m commit-1
 hg tip
 echo 1 > asdf
 hg commit -d '1000000 0' -u "foo@bar.com" -m commit-1
 hg tip
+echo 123 > asdf
+rm .hg/hgrc
+hg commit -d '1000000 0' -m commit-1 2>&1 | sed -e "s/[^ \t]*@[^ \t]*/user@host/"
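The last line added to the test pipes the commit output through sed to replace the machine-specific guessed username with the fixed string "user@host", so the expected output stays stable across hosts. The same substitution can be sketched in Python (note that sed without the /g flag, as used here, replaces only the first match per line):

```python
import re

def scrub_username(line):
    """Replicate sed -e "s/[^ \t]*@[^ \t]*/user@host/": replace the
    first whitespace-delimited token containing '@' with 'user@host'."""
    return re.sub(r'[^ \t]*@[^ \t]*', 'user@host', line, count=1)

print(scrub_username('No username found, using bos@serpentine.com instead'))
```

Normalizing variable output this way is what lets the test suite compare against a checked-in expected-output file.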
@@ -1,31 +1,25 @@
 changeset: 0:9426b370c206
 tag: tip
 user: My Name <myname@example.com>
 date: Mon Jan 12 13:46:40 1970 +0000
 summary: commit-1
 
-Please choose a commit username to be recorded in the changelog via
-command line option (-u "First Last <email@example.com>"), in the
-configuration files (hgrc), or by setting the EMAIL environment variable.
-
-abort: No commit username specified!
-transaction abort!
-rollback completed
-changeset: 1:2becd0bae6e6
+changeset: 1:4997f15a1b24
 tag: tip
 user: foo@bar.com
 date: Mon Jan 12 13:46:40 1970 +0000
 summary: commit-1
 
-changeset: 2:7a0176714f78
+changeset: 2:72b8012b424e
 tag: tip
 user: foobar <foo@bar.com>
 date: Mon Jan 12 13:46:40 1970 +0000
 summary: commit-1
 
-changeset: 3:f9b58c5a6352
+changeset: 3:35ff3067bedd
 tag: tip
 user: foo@bar.com
 date: Mon Jan 12 13:46:40 1970 +0000
 summary: commit-1
 
+No username found, using user@host instead
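The expected-output change captures the point of this changeset: committing without any configured username no longer aborts with "No commit username specified!" but instead prints a warning and falls back to a guessed user@host. A hedged sketch of a resolution order consistent with the test (the function and parameter names are illustrative, not Mercurial's actual implementation, and the exact precedence here is an assumption):

```python
import getpass
import os
import socket

def commit_username(opt_user=None, config_user=None, warn=print):
    """Pick a commit username: explicit -u option, HGUSER, ui.username
    from hgrc, EMAIL, then warn and guess user@host as a last resort."""
    user = (opt_user
            or os.environ.get('HGUSER')
            or config_user
            or os.environ.get('EMAIL'))
    if not user:
        # Fall back to the current login and hostname, warning instead
        # of aborting (the behavior this changeset introduces).
        user = '%s@%s' % (getpass.getuser(), socket.gethostname())
        warn('No username found, using %s instead' % user)
    return user
```

With this shape, `hg commit -u "foo@bar.com"` and a `[ui] username` entry both succeed silently, while a bare commit with no configuration produces exactly the warning line seen in the expected output.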