Allow the user to specify the fallback encoding for the changelog...
Alexis S. L. Carvalho - r3835:d1ce5461 default
HGRC(5)
=======
Bryan O'Sullivan <bos@serpentine.com>

NAME
----
hgrc - configuration files for Mercurial

SYNOPSIS
--------

The Mercurial system uses a set of configuration files to control
aspects of its behaviour.

FILES
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed.

(Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
(Unix) <install-root>/etc/mercurial/hgrc::
  Per-installation configuration files, searched for in the
  directory where Mercurial is installed. For example, if installed
  in /shared/tools, Mercurial will look in
  /shared/tools/etc/mercurial/hgrc. Options in these files apply to
  all Mercurial commands executed by any user in any directory.

(Unix) /etc/mercurial/hgrc.d/*.rc::
(Unix) /etc/mercurial/hgrc::
(Windows) C:\Mercurial\Mercurial.ini::
  Per-system configuration files, for the system on which Mercurial
  is running. Options in these files apply to all Mercurial
  commands executed by any user in any directory. Options in these
  files override per-installation options.

(Unix) $HOME/.hgrc::
(Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
(Windows) $HOME\Mercurial.ini::
  Per-user configuration file, for the user running Mercurial.
  Options in this file apply to all Mercurial commands executed by
  this user in any directory. Options in this file override
  per-installation and per-system options.
  On Windows systems, exactly one of these files is used, depending
  on whether the HOME environment variable is defined.

(Unix, Windows) <repo>/.hg/hgrc::
  Per-repository configuration options that only apply in a
  particular repository. This file is not version-controlled, and
  will not get transferred during a "clone" operation. Options in
  this file override options in all other configuration files.
  On Unix, most of this file will be ignored if it doesn't belong
  to a trusted user or to a trusted group. See the documentation
  for the trusted section below for more details.

SYNTAX
------

A configuration file consists of sections, led by a "[section]" header
and followed by "name: value" entries; "name=value" is also accepted.

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry.

Leading whitespace is removed from values. Empty lines are skipped.

The optional values can contain format strings which refer to other
values in the same section, or values in a special DEFAULT section.

Lines beginning with "#" or ";" are ignored and may be used to provide
comments.
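As a sketch of the format strings mentioned above, a value can refer to another value in the same section, or in the special DEFAULT section, with "%(name)s" (the names and paths here are purely illustrative):

```ini
[DEFAULT]
; hypothetical base directory referenced below
root = /home/user/src

[paths]
; %(root)s is substituted from the DEFAULT section
default = %(root)s/main
backup = %(root)s/backup
```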

SECTIONS
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible
keys, and their possible values.

decode/encode::
  Filters for transforming files on checkout/checkin. This would
  typically be used for newline processing or other
  localization/canonicalization of files.

  Filters consist of a filter pattern followed by a filter command.
  Filter patterns are globs by default, rooted at the repository
  root. For example, to match any file ending in ".txt" in the root
  directory only, use the pattern "*.txt". To match any file ending
  in ".c" anywhere in the repository, use the pattern "**.c".

  The filter command can start with a specifier, either "pipe:" or
  "tempfile:". If no specifier is given, "pipe:" is used by default.

  A "pipe:" command must accept data on stdin and return the
  transformed data on stdout.

  Pipe example:

    [encode]
    # uncompress gzip files on checkin to improve delta compression
    # note: not necessarily a good idea, just an example
    *.gz = pipe: gunzip

    [decode]
    # recompress gzip files when writing them to the working dir (we
    # can safely omit "pipe:", because it's the default)
    *.gz = gzip

  A "tempfile:" command is a template. The string INFILE is replaced
  with the name of a temporary file that contains the data to be
  filtered by the command. The string OUTFILE is replaced with the
  name of an empty temporary file, where the filtered data must be
  written by the command.

  NOTE: the tempfile mechanism is recommended for Windows systems,
  where the standard shell I/O redirection operators often have
  strange effects. In particular, if you are doing line ending
  conversion on Windows using the popular dos2unix and unix2dos
  programs, you *must* use the tempfile mechanism, as using pipes will
  corrupt the contents of your files.

  Tempfile example:

    [encode]
    # convert files to unix line ending conventions on checkin
    **.txt = tempfile: dos2unix -n INFILE OUTFILE

    [decode]
    # convert files to windows line ending conventions when writing
    # them to the working dir
    **.txt = tempfile: unix2dos -n INFILE OUTFILE

defaults::
  Use the [defaults] section to define command defaults, i.e. the
  default options/arguments to pass to the specified commands.

  The following example makes 'hg log' run in verbose mode, and
  'hg status' show only the modified files, by default.

    [defaults]
    log = -v
    status = -m

  The actual commands, instead of their aliases, must be used when
  defining command defaults. The command defaults will also be
  applied to the aliases of the commands defined.

email::
  Settings for extensions that send email messages.
  from;;
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.
  to;;
    Optional. Comma-separated list of recipients' email addresses.
  cc;;
    Optional. Comma-separated list of carbon copy recipients'
    email addresses.
  bcc;;
    Optional. Comma-separated list of blind carbon copy
    recipients' email addresses. Cannot be set interactively.
  method;;
    Optional. Method to use to send email messages. If value is
    "smtp" (default), use SMTP (see section "[smtp]" for
    configuration). Otherwise, the value is used as the name of a
    program to run that acts like sendmail (takes the "-f" option
    for the sender, a list of recipients on the command line, and
    the message on stdin). Normally, setting this to "sendmail" or
    "/usr/sbin/sendmail" is enough to use sendmail to send messages.

  Email example:

    [email]
    from = Joseph User <joe.user@example.com>
    method = /usr/sbin/sendmail

extensions::
  Mercurial has an extension mechanism for adding new features. To
  enable an extension, create an entry for it in this section.

  If you know that the extension is already in Python's search path,
  you can give the name of the module, followed by "=", with nothing
  after the "=".

  Otherwise, give a name that you choose, followed by "=", followed by
  the path to the ".py" file (including the file name extension) that
  defines the extension.

  Example for ~/.hgrc:

    [extensions]
    # (the mq extension will get loaded from mercurial's path)
    hgext.mq =
    # (this extension will get loaded from the file specified)
    myfeature = ~/.hgext/myfeature.py

hooks::
  Commands or Python functions that get automatically executed by
  various actions such as starting or finishing a commit. Multiple
  hooks can be run for the same action by appending a suffix to the
  action. Overriding a site-wide hook can be done by changing its
  value or setting it to an empty string.

  Example .hg/hgrc:

    [hooks]
    # do not use the site-wide hook
    incoming =
    incoming.email = /my/email/hook
    incoming.autobuild = /my/build/hook

  Most hooks are run with environment variables set that give useful
  additional information. For each hook below, the environment
  variables it is passed are listed with names of the form "$HG_foo".

  changegroup;;
    Run after a changegroup has been added via push, pull or
    unbundle. ID of the first new changeset is in $HG_NODE. URL from
    which changes came is in $HG_URL.
  commit;;
    Run after a changeset has been created in the local repository.
    ID of the newly created changeset is in $HG_NODE. Parent
    changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  incoming;;
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    $HG_NODE. The URL that was the source of the changes is in
    $HG_URL.
  outgoing;;
    Run after sending changes from local repository to another. ID of
    first changeset sent is in $HG_NODE. Source of operation is in
    $HG_SOURCE; see "preoutgoing" hook for description.
  prechangegroup;;
    Run before a changegroup is added via push, pull or unbundle.
    Exit status 0 allows the changegroup to proceed. Non-zero status
    will cause the push, pull or unbundle to fail. URL from which
    changes will come is in $HG_URL.
  precommit;;
    Run before starting a local commit. Exit status 0 allows the
    commit to proceed. Non-zero status will cause the commit to fail.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preoutgoing;;
    Run before computing changes to send from the local repository to
    another. Non-zero status will cause failure. This lets you
    prevent pull over http or ssh. It also runs for local pull, push
    (outbound) and bundle commands, but is not an effective
    protection there, since the files can simply be copied instead.
    Source of operation is in $HG_SOURCE. If "serve", the operation
    is happening on behalf of a remote ssh or http repository. If
    "push", "pull" or "bundle", the operation is happening on behalf
    of a repository on the same system.
  pretag;;
    Run before creating a tag. Exit status 0 allows the tag to be
    created. Non-zero status will cause the tag to fail. ID of
    changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
    is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  pretxnchangegroup;;
    Run after a changegroup has been added via push, pull or unbundle,
    but before the transaction has been committed. Changegroup is
    visible to hook program. This lets you validate incoming changes
    before accepting them. Passed the ID of the first new changeset
    in $HG_NODE. Exit status 0 allows the transaction to commit.
    Non-zero status will cause the transaction to be rolled back and
    the push, pull or unbundle will fail. URL that was source of
    changes is in $HG_URL.
  pretxncommit;;
    Run after a changeset has been created but before the transaction
    has been committed. Changeset is visible to hook program. This
    lets you validate the commit message and changes. Exit status 0
    allows the commit to proceed. Non-zero status will cause the
    transaction to be rolled back. ID of changeset is in $HG_NODE.
    Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
  preupdate;;
    Run before updating the working directory. Exit status 0 allows
    the update to proceed. Non-zero status will prevent the update.
    Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
    of second new parent is in $HG_PARENT2.
  tag;;
    Run after a tag is created. ID of tagged changeset is in
    $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
    $HG_LOCAL=1, in repo if $HG_LOCAL=0.
  update;;
    Run after updating the working directory. Changeset ID of first
    new parent is in $HG_PARENT1. If merge, ID of second new parent
    is in $HG_PARENT2. If the update succeeded, $HG_ERROR=0. If the
    update failed (e.g. because conflicts were not resolved),
    $HG_ERROR=1.

  Note: In earlier releases, the names of hook environment variables
  did not have a "HG_" prefix. The old unprefixed names are no longer
  provided in the environment.

  The syntax for Python hooks is as follows:

    hookname = python:modulename.submodule.callable

  Python hooks are run within the Mercurial process. Each hook is
  called with at least three keyword arguments: a ui object (keyword
  "ui"), a repository object (keyword "repo"), and a "hooktype"
  keyword that tells what kind of hook is used. Arguments listed as
  environment variables above are passed as keyword arguments, with no
  "HG_" prefix, and names in lower case.

  If a Python hook returns a "true" value or raises an exception, this
  is treated as failure of the hook.
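For instance, a Python hook might be wired up like this (the module and function names below are illustrative, not part of Mercurial; "checks" is assumed to be importable from Python's search path):

```ini
[hooks]
# check_message(ui, repo, hooktype, node=None, ...) should return
# a false value on success; a true value or an exception aborts
# the transaction, per the failure rule above
pretxncommit.checkmsg = python:checks.check_message
```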

http_proxy::
  Used to access web-based Mercurial repositories through an HTTP
  proxy.
  host;;
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
  no;;
    Optional. Comma-separated list of host names that should bypass
    the proxy.
  passwd;;
    Optional. Password to authenticate with at the proxy server.
  user;;
    Optional. User name to authenticate with at the proxy server.
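Putting these keys together, a minimal [http_proxy] sketch (the host names are placeholders):

```ini
[http_proxy]
host = myproxy:8000
# hosts reached directly, bypassing the proxy
no = localhost,intranet.example.com
```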

smtp::
  Configuration for extensions that need to send email messages.
  host;;
    Host name of mail server, e.g. "mail.example.com".
  port;;
    Optional. Port to connect to on mail server. Default: 25.
  tls;;
    Optional. Whether to connect to mail server using TLS. True or
    False. Default: False.
  username;;
    Optional. User name to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  password;;
    Optional. Password to authenticate to SMTP server with.
    If username is specified, password must also be specified.
    Default: none.
  local_hostname;;
    Optional. The hostname that the sender can use to identify
    itself to the MTA.
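A hypothetical [smtp] setup using the keys above (server name and credentials are placeholders):

```ini
[smtp]
host = mail.example.com
port = 25
tls = false
# username and password must be specified together, or both omitted
username = joe
password = hunter2
```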

paths::
  Assigns symbolic names to repositories. The left side is the
  symbolic name, and the right gives the directory or URL that is the
  location of the repository. Default paths can be declared by
  setting the following entries.
  default;;
    Directory or URL to use when pulling if no source is specified.
    Default is set to repository from which the current repository
    was cloned.
  default-push;;
    Optional. Directory or URL to use when pushing if no destination
    is specified.
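A sketch of a [paths] section (the URLs are placeholders):

```ini
[paths]
# used by "hg pull" when no source is given
default = http://hg.example.com/main
# used by "hg push" when no destination is given
default-push = ssh://hg.example.com/main
```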

server::
  Controls generic server settings.
  uncompressed;;
    Whether to allow clients to clone a repo using the uncompressed
    streaming protocol. This transfers about 40% more data than a
    regular clone, but uses less memory and CPU on both server and
    client. Over a LAN (100Mbps or better) or a very fast WAN, an
    uncompressed streaming clone is a lot faster (~10x) than a regular
    clone. Over most WAN connections (anything slower than about
    6Mbps), uncompressed streaming is slower, because of the extra
    data transfer overhead. Default is False.

trusted::
  For security reasons, Mercurial will not use the settings in
  the .hg/hgrc file from a repository if it doesn't belong to a
  trusted user or to a trusted group. The main exception is the
  web interface, which automatically uses some safe settings, since
  it's common to serve repositories from different users.

  This section specifies what users and groups are trusted. The
  current user is always trusted. To trust everybody, list a user
  or a group with name "*".

  users;;
    Comma-separated list of trusted users.
  groups;;
    Comma-separated list of trusted groups.
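For example (the user and group names here are illustrative):

```ini
[trusted]
users = alice, bob
groups = developers
```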
384
384
ui::
  User interface controls.
  debug;;
    Print debugging information. True or False. Default is False.
  editor;;
    The editor to use during a commit. Default is $EDITOR or "vi".
+ fallbackencoding;;
+   Encoding to try if it's not possible to decode the changelog using
+   UTF-8. Default is ISO-8859-1.
  ignore;;
    A file to read per-user ignore patterns from. This file should be in
    the same format as a repository-wide .hgignore file. This option
    supports hook syntax, so if you want to specify multiple ignore
    files, you can do so by setting something like
    "ignore.other = ~/.hgignore2". For details of the ignore file
    format, see the hgignore(5) man page.
  interactive;;
    Allow prompting the user. True or False. Default is True.
  logtemplate;;
    Template string for commands that print changesets.
  style;;
    Name of style to use for command output.
  merge;;
    The conflict resolution program to use during a manual merge.
    Default is "hgmerge".
  quiet;;
    Reduce the amount of output printed. True or False. Default is False.
  remotecmd;;
    Remote command to use for clone/push/pull operations. Default is 'hg'.
  ssh;;
    Command to use for SSH connections. Default is 'ssh'.
  strict;;
    Require exact command names, instead of allowing unambiguous
    abbreviations. True or False. Default is False.
  timeout;;
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 600.
  username;;
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. "Fred Widget
    <fred@example.com>". Default is $EMAIL or username@hostname.
  verbose;;
    Increase the amount of output printed. True or False. Default is False.


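A minimal `[ui]` section exercising the option this changeset adds might look like this (the values are illustrative):

```ini
[ui]
username = Fred Widget <fred@example.com>
editor = vi
# try Latin-1 for changelog entries that are not valid UTF-8
fallbackencoding = ISO-8859-1
```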
web::
  Web interface configuration.
  accesslog;;
    Where to output the access log. Default is stdout.
  address;;
    Interface address to bind to. Default is all.
  allow_archive;;
    List of archive formats (bz2, gz, zip) allowed for downloading.
    Default is empty.
  allowbz2;;
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
    Default is false.
  allowgz;;
    (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
    Default is false.
  allowpull;;
    Whether to allow pulling from the repository. Default is true.
  allow_push;;
    Whether to allow pushing to the repository. If empty or not set,
    push is not allowed. If the special value "*", any remote user
    can push, including unauthenticated users. Otherwise, the remote
    user must have been authenticated, and the authenticated user name
    must be present in this list (separated by whitespace or ",").
    The contents of the allow_push list are examined after the
    deny_push list.
  allowzip;;
    (DEPRECATED) Whether to allow .zip downloading of repo revisions.
    Default is false. This feature creates temporary files.
  baseurl;;
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct URLs.
    Example: "http://hgserver/repos/"
  contact;;
    Name or email address of the person in charge of the repository.
    Default is "unknown".
  deny_push;;
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value "*", all remote users
    are denied push. Otherwise, unauthenticated users are all denied,
    and any authenticated user name present in this list (separated by
    whitespace or ",") is also denied. The contents of the deny_push
    list are examined before the allow_push list.
  description;;
    Textual description of the repository's purpose or contents.
    Default is "unknown".
  errorlog;;
    Where to output the error log. Default is stderr.
  ipv6;;
    Whether to use IPv6. Default is false.
  name;;
    Repository name to use in the web interface. Default is current
    working directory.
  maxchanges;;
    Maximum number of changes to list on the changelog. Default is 10.
  maxfiles;;
    Maximum number of files to list per changeset. Default is 10.
  port;;
    Port to listen on. Default is 8000.
  push_ssl;;
    Whether to require that inbound pushes be transported over SSL to
    prevent password sniffing. Default is true.
  stripes;;
    How many lines a "zebra stripe" should span in multiline output.
    Default is 1; set to 0 to disable.
  style;;
    Which template map style to use.
  templates;;
    Where to find the HTML templates. Default is install path.


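For instance, to require SSL for pushes and restrict pushing by user name (deny_push is checked before allow_push; the names are illustrative):

```ini
[web]
push_ssl = true
# deny_push wins over allow_push for any name in both lists
deny_push = mallory
allow_push = alice, bob
```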
AUTHOR
------
Bryan O'Sullivan <bos@serpentine.com>.

Mercurial was written by Matt Mackall <mpm@selenic.com>.

SEE ALSO
--------
hg(1), hgignore(5)

COPYING
-------
This manual page is copyright 2005 Bryan O'Sullivan.
Mercurial is copyright 2005, 2006 Matt Mackall.
Free use of this software is granted under the terms of the GNU General
Public License (GPL).
@@ -1,1931 +1,1935 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from node import *
from i18n import gettext as _
from demandload import *
import repo
demandload(globals(), "appendfile changegroup")
demandload(globals(), "changelog dirstate filelog manifest context")
demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
demandload(globals(), "os revlog time util")

class localrepository(repo.repository):
    capabilities = ('lookup', 'changegroupsubset')

    def __del__(self):
        self.transhandle = None
    def __init__(self, parentui, path=None, create=0):
        repo.repository.__init__(self)
        if not path:
            p = os.getcwd()
            while not os.path.isdir(os.path.join(p, ".hg")):
                oldp = p
                p = os.path.dirname(p)
                if p == oldp:
                    raise repo.RepoError(_("There is no Mercurial repository"
                                           " here (.hg not found)"))
            path = p
        self.path = os.path.join(path, ".hg")
        self.spath = self.path

        if not os.path.isdir(self.path):
            if create:
                if not os.path.exists(path):
                    os.mkdir(path)
                os.mkdir(self.path)
                if self.spath != self.path:
                    os.mkdir(self.spath)
            else:
                raise repo.RepoError(_("repository %s not found") % path)
        elif create:
            raise repo.RepoError(_("repository %s already exists") % path)

        self.root = os.path.realpath(path)
        self.origroot = path
        self.ui = ui.ui(parentui=parentui)
        self.opener = util.opener(self.path)
        self.sopener = util.opener(self.spath)
        self.wopener = util.opener(self.root)

        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
        except IOError:
            pass

        v = self.ui.configrevlog()
        self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
        self.revlogv1 = self.revlogversion != revlog.REVLOGV0
        fl = v.get('flags', None)
        flags = 0
        if fl != None:
            for x in fl.split():
                flags |= revlog.flagstr(x)
        elif self.revlogv1:
            flags = revlog.REVLOG_DEFAULT_FLAGS

        v = self.revlogversion | flags
        self.manifest = manifest.manifest(self.sopener, v)
        self.changelog = changelog.changelog(self.sopener, v)

+       fallback = self.ui.config('ui', 'fallbackencoding')
+       if fallback:
+           util._fallbackencoding = fallback
+
        # the changelog might not have the inline index flag
        # on. If the format of the changelog is the same as found in
        # .hgrc, apply any flags found in the .hgrc as well.
        # Otherwise, just version from the changelog
        v = self.changelog.version
        if v == self.revlogversion:
            v |= flags
        self.revlogversion = v

        self.tagscache = None
        self.branchcache = None
        self.nodetagscache = None
        self.encodepats = None
        self.decodepats = None
        self.transhandle = None

        self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)

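The added lines make the configured value available to the decoding helpers in `util`. A rough sketch of how such a fallback decode behaves (illustrative only; `to_local` is a stand-in for this idea, not Mercurial's actual `util.tolocal`):

```python
def to_local(s, fallbackencoding="ISO-8859-1"):
    """Decode changelog bytes for display: try UTF-8 first, then the
    configured fallback encoding; if even the fallback fails, replace
    the offending bytes rather than abort."""
    for enc in ("utf-8", fallbackencoding):
        try:
            return s.decode(enc)
        except UnicodeDecodeError:
            continue
    return s.decode("utf-8", "replace")

# 0xe9 is 'e acute' in ISO-8859-1 but is not valid on its own in UTF-8,
# so the first decode fails and the fallback is used.
print(to_local(b"caf\xe9"))
```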
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        def callhook(hname, funcname):
            '''call python hook. hook is callable object, looked up as
            name in python module. if callable returns "true", hook
            fails, else passes. if hook raises exception, treated as
            hook failure. exception propagates if throw is "true".

            reason for "true" meaning "hook failed" is so that
            unmodified commands (e.g. mercurial.commands.update) can
            be run as hooks without wrappers to convert return values.'''

            self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
            d = funcname.rfind('.')
            if d == -1:
                raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
                                 % (hname, funcname))
            modname = funcname[:d]
            try:
                obj = __import__(modname)
            except ImportError:
                try:
                    # extensions are loaded with hgext_ prefix
                    obj = __import__("hgext_%s" % modname)
                except ImportError:
                    raise util.Abort(_('%s hook is invalid '
                                       '(import of "%s" failed)') %
                                     (hname, modname))
            try:
                for p in funcname.split('.')[1:]:
                    obj = getattr(obj, p)
            except AttributeError, err:
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not defined)') %
                                 (hname, funcname))
            if not callable(obj):
                raise util.Abort(_('%s hook is invalid '
                                   '("%s" is not callable)') %
                                 (hname, funcname))
            try:
                r = obj(ui=self.ui, repo=self, hooktype=name, **args)
            except (KeyboardInterrupt, util.SignalInterrupt):
                raise
            except Exception, exc:
                if isinstance(exc, util.Abort):
                    self.ui.warn(_('error: %s hook failed: %s\n') %
                                 (hname, exc.args[0]))
                else:
                    self.ui.warn(_('error: %s hook raised an exception: '
                                   '%s\n') % (hname, exc))
                if throw:
                    raise
                self.ui.print_exc()
                return True
            if r:
                if throw:
                    raise util.Abort(_('%s hook failed') % hname)
                self.ui.warn(_('warning: %s hook failed\n') % hname)
            return r

        def runhook(name, cmd):
            self.ui.note(_("running hook %s: %s\n") % (name, cmd))
            env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
            r = util.system(cmd, environ=env, cwd=self.root)
            if r:
                desc, r = util.explain_exit(r)
                if throw:
                    raise util.Abort(_('%s hook %s') % (name, desc))
                self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
            return r

        r = False
        hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
                 if hname.split(".", 1)[0] == name and cmd]
        hooks.sort()
        for hname, cmd in hooks:
            if cmd.startswith('python:'):
                r = callhook(hname, cmd[7:].strip()) or r
            else:
                r = runhook(hname, cmd) or r
        return r

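The `hook` method dispatches on the `python:` prefix: such commands run in-process via `callhook`, everything else goes through the shell via `runhook` with the hook arguments exported as `HG_*` environment variables. In hgrc terms, for example (the Python module and function names here are made up):

```ini
[hooks]
# external hook: run through the shell; arguments arrive as HG_* variables
changegroup = hg update >&2
# in-process hook: "python:" prefix, resolved as module.function
pretxncommit.whitespace = python:mychecks.reject_tabs
```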
    tag_disallowed = ':\r\n'

    def tag(self, name, node, message, local, user, date):
        '''tag a revision with a symbolic name.

        if local is True, the tag is stored in a per-repository file.
        otherwise, it is stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tag in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        for c in self.tag_disallowed:
            if c in name:
                raise util.Abort(_('%r cannot be used in a tag name') % c)

        self.hook('pretag', throw=True, node=hex(node), tag=name, local=local)

        if local:
            # local tags are stored in the current charset
            self.opener('localtags', 'a').write('%s %s\n' % (hex(node), name))
            self.hook('tag', node=hex(node), tag=name, local=local)
            return

        for x in self.status()[:5]:
            if '.hgtags' in x:
                raise util.Abort(_('working copy of .hgtags is changed '
                                   '(please commit .hgtags manually)'))

        # committed tags are stored in UTF-8
        line = '%s %s\n' % (hex(node), util.fromlocal(name))
        self.wfile('.hgtags', 'ab').write(line)
        if self.dirstate.state('.hgtags') == '?':
            self.add(['.hgtags'])

        self.commit(['.hgtags'], message, user, date)
        self.hook('tag', node=hex(node), tag=name, local=local)

    def tags(self):
        '''return a mapping of tag to node'''
        if not self.tagscache:
            self.tagscache = {}

            def parsetag(line, context):
                if not line:
                    return
                s = l.split(" ", 1)
                if len(s) != 2:
                    self.ui.warn(_("%s: cannot parse entry\n") % context)
                    return
                node, key = s
                key = util.tolocal(key.strip()) # stored in UTF-8
                try:
                    bin_n = bin(node)
                except TypeError:
                    self.ui.warn(_("%s: node '%s' is not well formed\n") %
                                 (context, node))
                    return
                if bin_n not in self.changelog.nodemap:
                    self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
                                 (context, key))
                    return
                self.tagscache[key] = bin_n

            # read the tags file from each head, ending with the tip,
            # and add each tag found to the map, with "newer" ones
            # taking precedence
            f = None
            for rev, node, fnode in self._hgtagsnodes():
                f = (f and f.filectx(fnode) or
                     self.filectx('.hgtags', fileid=fnode))
                count = 0
                for l in f.data().splitlines():
                    count += 1
                    parsetag(l, _("%s, line %d") % (str(f), count))

            try:
                f = self.opener("localtags")
                count = 0
                for l in f:
                    # localtags are stored in the local character set
                    # while the internal tag table is stored in UTF-8
                    l = util.fromlocal(l)
                    count += 1
                    parsetag(l, _("localtags, line %d") % count)
            except IOError:
                pass

            self.tagscache['tip'] = self.changelog.tip()

        return self.tagscache

    def _hgtagsnodes(self):
        heads = self.heads()
        heads.reverse()
        last = {}
        ret = []
        for node in heads:
            c = self.changectx(node)
            rev = c.rev()
            try:
                fnode = c.filenode('.hgtags')
            except repo.LookupError:
                continue
            ret.append((rev, node, fnode))
            if fnode in last:
                ret[last[fnode]] = None
            last[fnode] = len(ret) - 1
        return [item for item in ret if item]

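The precedence rule in `tags()` — each `.hgtags` line is `<hex-node> <tag-name>`, later entries override earlier ones, and malformed or unknown nodes are skipped with a warning — can be sketched as a stand-alone function (a simplified sketch, not the real parser, which also handles charset conversion):

```python
from binascii import unhexlify

def parse_hgtags(data, known_nodes):
    """Simplified .hgtags parsing: each line is '<hex-node> <tag-name>';
    later lines override earlier ones, bad entries are skipped."""
    tags = {}
    for line in data.splitlines():
        if not line:
            continue
        parts = line.split(" ", 1)
        if len(parts) != 2:
            continue  # the real code warns: "cannot parse entry"
        node, name = parts[0], parts[1].strip()
        try:
            bin_n = unhexlify(node)
        except ValueError:  # binascii.Error is a ValueError subclass
            continue  # node is not well formed
        if bin_n not in known_nodes:
            continue  # tag refers to an unknown changeset
        tags[name] = bin_n
    return tags
```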
    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().items():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        l.sort()
        return [(t, n) for r, t, n in l]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().items():
                self.nodetagscache.setdefault(n, []).append(t)
        return self.nodetagscache.get(node, [])

    def _branchtags(self):
        partial, last, lrev = self._readbranchcache()

        tiprev = self.changelog.count() - 1
        if lrev != tiprev:
            self._updatebranchcache(partial, lrev+1, tiprev+1)
            self._writebranchcache(partial, self.changelog.tip(), tiprev)

        return partial

    def branchtags(self):
        if self.branchcache is not None:
            return self.branchcache

        self.branchcache = {} # avoid recursion in changectx
        partial = self._branchtags()

        # the branch cache is stored on disk as UTF-8, but in the local
        # charset internally
        for k, v in partial.items():
            self.branchcache[util.tolocal(k)] = v
        return self.branchcache

    def _readbranchcache(self):
        partial = {}
        try:
            f = self.opener("branches.cache")
            lines = f.read().split('\n')
            f.close()
            last, lrev = lines.pop(0).rstrip().split(" ", 1)
            last, lrev = bin(last), int(lrev)
            if not (lrev < self.changelog.count() and
                    self.changelog.node(lrev) == last): # sanity check
                # invalidate the cache
                raise ValueError('Invalid branch cache: unknown tip')
            for l in lines:
                if not l: continue
                node, label = l.rstrip().split(" ", 1)
                partial[label] = bin(node)
        except (KeyboardInterrupt, util.SignalInterrupt):
            raise
        except Exception, inst:
            if self.ui.debugflag:
                self.ui.warn(str(inst), '\n')
            partial, last, lrev = {}, nullid, nullrev
        return partial, last, lrev

    def _writebranchcache(self, branches, tip, tiprev):
        try:
            f = self.opener("branches.cache", "w")
            f.write("%s %s\n" % (hex(tip), tiprev))
            for label, node in branches.iteritems():
                f.write("%s %s\n" % (hex(node), label))
        except IOError:
            pass

    def _updatebranchcache(self, partial, start, end):
        for r in xrange(start, end):
            c = self.changectx(r)
            b = c.branch()
            if b:
                partial[b] = c.node()

378 def lookup(self, key):
382 def lookup(self, key):
379 if key == '.':
383 if key == '.':
380 key = self.dirstate.parents()[0]
384 key = self.dirstate.parents()[0]
381 if key == nullid:
385 if key == nullid:
382 raise repo.RepoError(_("no revision checked out"))
386 raise repo.RepoError(_("no revision checked out"))
383 elif key == 'null':
387 elif key == 'null':
384 return nullid
388 return nullid
385 n = self.changelog._match(key)
389 n = self.changelog._match(key)
386 if n:
390 if n:
387 return n
391 return n
388 if key in self.tags():
392 if key in self.tags():
389 return self.tags()[key]
393 return self.tags()[key]
390 if key in self.branchtags():
394 if key in self.branchtags():
391 return self.branchtags()[key]
395 return self.branchtags()[key]
392 n = self.changelog._partialmatch(key)
396 n = self.changelog._partialmatch(key)
393 if n:
397 if n:
394 return n
398 return n
395 raise repo.RepoError(_("unknown revision '%s'") % key)
399 raise repo.RepoError(_("unknown revision '%s'") % key)
396
400
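lookup() tries a fixed chain of namespaces in order: exact changelog match, then tags, then branch tags, then a unique hash-prefix match, raising only when every resolver fails. A reduced sketch of that resolution order, with plain dict lookups standing in for the real tag and branch tables (all names here are illustrative):

```python
def resolve(key, matchers):
    # try each namespace in priority order; first hit wins
    for m in matchers:
        n = m(key)
        if n is not None:
            return n
    raise KeyError("unknown revision '%s'" % key)

tags = {"tip": "abc123"}
branches = {"default": "abc123"}
node = resolve("tip", [tags.get, branches.get])
```

Because earlier namespaces shadow later ones, a tag named like a branch always wins, which matches the ordering in the method above.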
    def dev(self):
        return os.lstat(self.path).st_dev

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def sjoin(self, f):
        return os.path.join(self.spath, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f, self.revlogversion)

    def changectx(self, changeid=None):
        return context.changectx(self, changeid)

    def workingctx(self):
        return context.workingctx(self)

    def parents(self, changeid=None):
        '''
        get list of changectxs for parents of changeid or working directory
        '''
        if changeid is None:
            pl = self.dirstate.parents()
        else:
            n = self.changelog.lookup(changeid)
            pl = self.changelog.parents(n)
        if pl[1] == nullid:
            return [self.changectx(pl[0])]
        return [self.changectx(pl[0]), self.changectx(pl[1])]

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def wread(self, filename):
        if self.encodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("encode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.encodepats = l

        data = self.wopener(filename, 'r').read()

        for mf, cmd in self.encodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        return data

    def wwrite(self, filename, data, fd=None):
        if self.decodepats == None:
            l = []
            for pat, cmd in self.ui.configitems("decode"):
                mf = util.matcher(self.root, "", [pat], [], [])[1]
                l.append((mf, cmd))
            self.decodepats = l

        for mf, cmd in self.decodepats:
            if mf(filename):
                self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
                data = util.filter(data, cmd)
                break

        if fd:
            return fd.write(data)
        return self.wopener(filename, 'w').write(data)

    def transaction(self):
        tr = self.transhandle
        if tr != None and tr.running():
            return tr.nest()

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)

        renames = [(self.sjoin("journal"), self.sjoin("undo")),
                   (self.join("journal.dirstate"), self.join("undo.dirstate"))]
        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(renames))
        self.transhandle = tr
        return tr

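The journal/undo scheme above works by stashing the pre-transaction state in a journal file, then renaming the journal to "undo" when the transaction closes, so rollback() can later restore it with a single rename. A toy end-to-end sketch of that idea using throwaway files (the `write`/`read` helpers are illustrative, not Mercurial's transaction API):

```python
import os, tempfile

d = tempfile.mkdtemp()

def write(name, data):
    f = open(os.path.join(d, name), "w")
    f.write(data)
    f.close()

def read(name):
    f = open(os.path.join(d, name))
    try:
        return f.read()
    finally:
        f.close()

write("dirstate", "old-state")
# transaction start: stash the current dirstate in a journal
write("journal.dirstate", read("dirstate"))
write("dirstate", "new-state")
# transaction close: the journal is renamed to "undo"
os.rename(os.path.join(d, "journal.dirstate"),
          os.path.join(d, "undo.dirstate"))
# rollback: the undo file restores the pre-transaction dirstate
os.rename(os.path.join(d, "undo.dirstate"), os.path.join(d, "dirstate"))
```

Renames are cheap and atomic on POSIX filesystems, which is what makes this a safe way to flip between "in progress" and "committed, undoable" states.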
    def recover(self):
        l = self.lock()
        if os.path.exists(self.sjoin("journal")):
            self.ui.status(_("rolling back interrupted transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("journal"))
            self.reload()
            return True
        else:
            self.ui.warn(_("no interrupted transaction available\n"))
            return False

    def rollback(self, wlock=None):
        if not wlock:
            wlock = self.wlock()
        l = self.lock()
        if os.path.exists(self.sjoin("undo")):
            self.ui.status(_("rolling back last transaction\n"))
            transaction.rollback(self.sopener, self.sjoin("undo"))
            util.rename(self.join("undo.dirstate"), self.join("dirstate"))
            self.reload()
            self.wreload()
        else:
            self.ui.warn(_("no rollback information available\n"))

    def wreload(self):
        self.dirstate.read()

    def reload(self):
        self.changelog.load()
        self.manifest.load()
        self.tagscache = None
        self.nodetagscache = None

    def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
                desc=None):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except lock.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=1):
        return self.do_lock(self.sjoin("lock"), wait, acquirefn=self.reload,
                            desc=_('repository %s') % self.origroot)

    def wlock(self, wait=1):
        return self.do_lock(self.join("wlock"), wait, self.dirstate.write,
                            self.wreload,
                            desc=_('working directory of %s') % self.origroot)

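do_lock() above is a two-phase acquire: first a non-blocking attempt (timeout 0), and only if the lock is held and `wait` was requested does it warn and retry with a bounded timeout. A minimal sketch of that pattern, with `threading.Lock` standing in for Mercurial's on-disk lock file (the function name and error are illustrative):

```python
import threading

def do_lock(l, wait):
    # phase 1: non-blocking attempt, like lock.lock(..., 0, ...)
    if l.acquire(False):
        return True
    if not wait:
        # caller did not ask to wait: surface the contention immediately
        raise RuntimeError("lock held")
    # phase 2: blocking attempt; the real code bounds this with the
    # ui.timeout config value (600 seconds by default)
    return l.acquire(True)
```

The non-blocking first attempt keeps the common uncontended case cheap and lets the caller print the "waiting for lock" warning exactly once, before the slow path.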
    def filecommit(self, fn, manifest1, manifest2, linkrev, transaction, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        t = self.wread(fn)
        fl = self.file(fn)
        fp1 = manifest1.get(fn, nullid)
        fp2 = manifest2.get(fn, nullid)

        meta = {}
        cp = self.dirstate.copied(fn)
        if cp:
            meta["copy"] = cp
            if not manifest2: # not a branch merge
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
                fp2 = nullid
            elif fp2 != nullid: # copied on remote side
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
            elif fp1 != nullid: # copied on local side, reversed
                meta["copyrev"] = hex(manifest2.get(cp))
                fp2 = nullid
            else: # directory rename
                meta["copyrev"] = hex(manifest1.get(cp, nullid))
            self.ui.debug(_(" %s: copy %s:%s\n") %
                          (fn, cp, meta["copyrev"]))
            fp1 = nullid
        elif fp2 != nullid:
            # is one parent an ancestor of the other?
            fpa = fl.ancestor(fp1, fp2)
            if fpa == fp1:
                fp1, fp2 = fp2, nullid
            elif fpa == fp2:
                fp2 = nullid

        # is the file unmodified from the parent? report existing entry
        if fp2 == nullid and not fl.cmp(fp1, t):
            return fp1

        changelist.append(fn)
        return fl.add(t, meta, transaction, linkrev, fp1, fp2)

    def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
        if p1 is None:
            p1, p2 = self.dirstate.parents()
        return self.commit(files=files, text=text, user=user, date=date,
                           p1=p1, p2=p2, wlock=wlock)

    def commit(self, files=None, text="", user=None, date=None,
               match=util.always, force=False, lock=None, wlock=None,
               force_editor=False, p1=None, p2=None, extra={}):

        commit = []
        remove = []
        changed = []
        use_dirstate = (p1 is None) # not rawcommit
        extra = extra.copy()

        if use_dirstate:
            if files:
                for f in files:
                    s = self.dirstate.state(f)
                    if s in 'nmai':
                        commit.append(f)
                    elif s == 'r':
                        remove.append(f)
                    else:
                        self.ui.warn(_("%s not tracked!\n") % f)
            else:
                changes = self.status(match=match)[:5]
                modified, added, removed, deleted, unknown = changes
                commit = modified + added
                remove = removed
        else:
            commit = files

        if use_dirstate:
            p1, p2 = self.dirstate.parents()
            update_dirstate = True
        else:
            p1, p2 = p1, p2 or nullid
            update_dirstate = (self.dirstate.parents()[0] == p1)

        c1 = self.changelog.read(p1)
        c2 = self.changelog.read(p2)
        m1 = self.manifest.read(c1[0]).copy()
        m2 = self.manifest.read(c2[0])

        if use_dirstate:
            branchname = util.fromlocal(self.workingctx().branch())
        else:
            branchname = ""

        if use_dirstate:
            oldname = c1[5].get("branch", "") # stored in UTF-8
            if not commit and not remove and not force and p2 == nullid and \
                   branchname == oldname:
                self.ui.status(_("nothing changed\n"))
                return None

        xp1 = hex(p1)
        if p2 == nullid: xp2 = ''
        else: xp2 = hex(p2)

        self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)

        if not wlock:
            wlock = self.wlock()
        if not lock:
            lock = self.lock()
        tr = self.transaction()

        # check in files
        new = {}
        linkrev = self.changelog.count()
        commit.sort()
        for f in commit:
            self.ui.note(f + "\n")
            try:
                new[f] = self.filecommit(f, m1, m2, linkrev, tr, changed)
                m1.set(f, util.is_exec(self.wjoin(f), m1.execf(f)))
            except IOError:
                if use_dirstate:
                    self.ui.warn(_("trouble committing %s!\n") % f)
                    raise
                else:
                    remove.append(f)

        # update manifest
        m1.update(new)
        remove.sort()

        for f in remove:
            if f in m1:
                del m1[f]
        mn = self.manifest.add(m1, tr, linkrev, c1[0], c2[0], (new, remove))

        # add changeset
        new = new.keys()
        new.sort()

        user = user or self.ui.username()
        if not text or force_editor:
            edittext = []
            if text:
                edittext.append(text)
            edittext.append("")
            edittext.append("HG: user: %s" % user)
            if p2 != nullid:
                edittext.append("HG: branch merge")
            edittext.extend(["HG: changed %s" % f for f in changed])
            edittext.extend(["HG: removed %s" % f for f in remove])
            if not changed and not remove:
                edittext.append("HG: no files changed")
            edittext.append("")
            # run editor in the repository root
            olddir = os.getcwd()
            os.chdir(self.root)
            text = self.ui.edit("\n".join(edittext), user)
            os.chdir(olddir)

        lines = [line.rstrip() for line in text.rstrip().splitlines()]
        while lines and not lines[0]:
            del lines[0]
        if not lines:
            return None
        text = '\n'.join(lines)
        if branchname:
            extra["branch"] = branchname
        n = self.changelog.add(mn, changed + remove, text, tr, p1, p2,
                               user, date, extra)
        self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                  parent2=xp2)
        tr.close()

        if use_dirstate or update_dirstate:
            self.dirstate.setparents(n)
        if use_dirstate:
            self.dirstate.update(new, "n")
            self.dirstate.forget(remove)

        self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
        return n

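The editor template built inside commit() is plain string assembly: "HG:"-prefixed status lines that the user sees (and that are later stripped along with blank leading lines). A standalone sketch of that assembly, factored into an illustrative helper (`edit_template` is not a real Mercurial function):

```python
def edit_template(user, changed, removed, merge=False):
    # mirror the edittext construction in commit() above
    edittext = ["", "HG: user: %s" % user]
    if merge:
        edittext.append("HG: branch merge")
    edittext.extend(["HG: changed %s" % f for f in changed])
    edittext.extend(["HG: removed %s" % f for f in removed])
    if not changed and not removed:
        edittext.append("HG: no files changed")
    edittext.append("")
    return "\n".join(edittext)
```

Keeping every generated line behind the "HG:" prefix is what lets later tooling distinguish user-written message text from the scaffolding.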
    def walk(self, node=None, files=[], match=util.always, badmatch=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function

        results are yielded in a tuple (src, filename), where src
        is one of:
        'f' the file was found in the directory tree
        'm' the file was only in the dirstate and not in the tree
        'b' file was not found and matched badmatch
        '''

        if node:
            fdict = dict.fromkeys(files)
            for fn in self.manifest.read(self.changelog.read(node)[0]):
                for ffn in fdict:
                    # match if the file is the exact name or a directory
                    if ffn == fn or fn.startswith("%s/" % ffn):
                        del fdict[ffn]
                        break
                if match(fn):
                    yield 'm', fn
            for fn in fdict:
                if badmatch and badmatch(fn):
                    if match(fn):
                        yield 'b', fn
                else:
                    self.ui.warn(_('%s: No such file in rev %s\n') % (
                        util.pathto(self.getcwd(), fn), short(node)))
        else:
            for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
                yield src, fn

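The name test inside walk() treats each requested name as either an exact file or a directory prefix, which is why it checks the name followed by "/" rather than a bare prefix. A tiny sketch of that predicate (the function name is illustrative):

```python
def matches(pattern, fn):
    # exact file name, or fn lives under pattern/ as a directory
    return pattern == fn or fn.startswith(pattern + "/")
```

The "/" suffix is the important detail: without it, asking for "a" would wrongly match "ab".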
    def status(self, node1=None, node2=None, files=[], match=util.always,
               wlock=None, list_ignored=False, list_clean=False):
        """return status of files between two nodes or node and working directory

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def fcmp(fn, mf):
            t1 = self.wread(fn)
            return self.file(fn).cmp(mf.get(fn, nullid), t1)

        def mfmatches(node):
            change = self.changelog.read(node)
            mf = self.manifest.read(change[0]).copy()
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        modified, added, removed, deleted, unknown = [], [], [], [], []
        ignored, clean = [], []

        compareworking = False
        if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
            compareworking = True

        if not compareworking:
            # read the manifest from node1 before the manifest from node2,
            # so that we'll hit the manifest cache if we're going through
            # all the revisions in parent->child order.
            mf1 = mfmatches(node1)

        # are we comparing the working directory?
        if not node2:
            if not wlock:
                try:
                    wlock = self.wlock(wait=0)
                except lock.LockException:
                    wlock = None
            (lookup, modified, added, removed, deleted, unknown,
             ignored, clean) = self.dirstate.status(files, match,
                                                    list_ignored, list_clean)

            # are we comparing working dir against its parent?
            if compareworking:
                if lookup:
                    # do a full compare of any files that might have changed
                    mf2 = mfmatches(self.dirstate.parents()[0])
                    for f in lookup:
                        if fcmp(f, mf2):
                            modified.append(f)
                        else:
                            clean.append(f)
                            if wlock is not None:
                                self.dirstate.update([f], "n")
            else:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                # XXX: create it in dirstate.py ?
                mf2 = mfmatches(self.dirstate.parents()[0])
                for f in lookup + modified + added:
                    mf2[f] = ""
                    mf2.set(f, execf=util.is_exec(self.wjoin(f), mf2.execf(f)))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
        else:
            # we are comparing two revisions
            mf2 = mfmatches(node2)

        if not compareworking:
            # flush lists from dirstate before comparing manifests
            modified, added, clean = [], [], []

            # make sure to sort the files so we talk to the disk in a
            # reasonable order
            mf2keys = mf2.keys()
            mf2keys.sort()
            for fn in mf2keys:
                if mf1.has_key(fn):
                    if mf1.flags(fn) != mf2.flags(fn) or \
                       (mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1))):
                        modified.append(fn)
                    elif list_clean:
                        clean.append(fn)
                    del mf1[fn]
                else:
                    added.append(fn)

            removed = mf1.keys()

        # sort and return results:
        for l in modified, added, removed, deleted, unknown, ignored, clean:
            l.sort()
        return (modified, added, removed, deleted, unknown, ignored, clean)

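The two-revision branch of status() is essentially a manifest diff: walk the second manifest, classify shared files as modified or clean by hash, delete each visited entry from the first manifest, and whatever remains there is removed. A reduced standalone sketch of that core, using plain dicts of file-to-hash and ignoring flags and the working-directory cases (`diff_manifests` is an illustrative name):

```python
def diff_manifests(mf1, mf2):
    mf1 = dict(mf1)  # copy: we delete entries as we visit them
    modified, added = [], []
    for fn in sorted(mf2):  # sorted, so disk access stays in a sane order
        if fn in mf1:
            if mf1[fn] != mf2[fn]:
                modified.append(fn)
            del mf1[fn]
        else:
            added.append(fn)
    removed = sorted(mf1)   # anything not visited exists only in mf1
    return modified, added, removed
```

Deleting visited keys from a copy of the first manifest is the trick that yields the removed list for free once the loop is done.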
876 def add(self, list, wlock=None):
880 def add(self, list, wlock=None):
877 if not wlock:
881 if not wlock:
878 wlock = self.wlock()
882 wlock = self.wlock()
879 for f in list:
883 for f in list:
880 p = self.wjoin(f)
884 p = self.wjoin(f)
881 if not os.path.exists(p):
885 if not os.path.exists(p):
882 self.ui.warn(_("%s does not exist!\n") % f)
886 self.ui.warn(_("%s does not exist!\n") % f)
883 elif not os.path.isfile(p):
887 elif not os.path.isfile(p):
884 self.ui.warn(_("%s not added: only files supported currently\n")
888 self.ui.warn(_("%s not added: only files supported currently\n")
885 % f)
889 % f)
886 elif self.dirstate.state(f) in 'an':
890 elif self.dirstate.state(f) in 'an':
887 self.ui.warn(_("%s already tracked!\n") % f)
891 self.ui.warn(_("%s already tracked!\n") % f)
888 else:
892 else:
889 self.dirstate.update([f], "a")
893 self.dirstate.update([f], "a")
890
894
891 def forget(self, list, wlock=None):
895 def forget(self, list, wlock=None):
892 if not wlock:
896 if not wlock:
893 wlock = self.wlock()
897 wlock = self.wlock()
894 for f in list:
898 for f in list:
895 if self.dirstate.state(f) not in 'ai':
899 if self.dirstate.state(f) not in 'ai':
896 self.ui.warn(_("%s not added!\n") % f)
900 self.ui.warn(_("%s not added!\n") % f)
897 else:
901 else:
898 self.dirstate.forget([f])
902 self.dirstate.forget([f])
899
903
900 def remove(self, list, unlink=False, wlock=None):
904 def remove(self, list, unlink=False, wlock=None):
901 if unlink:
905 if unlink:
902 for f in list:
906 for f in list:
903 try:
907 try:
904 util.unlink(self.wjoin(f))
908 util.unlink(self.wjoin(f))
905 except OSError, inst:
909 except OSError, inst:
906 if inst.errno != errno.ENOENT:
910 if inst.errno != errno.ENOENT:
907 raise
911 raise
908 if not wlock:
912 if not wlock:
909 wlock = self.wlock()
913 wlock = self.wlock()
910 for f in list:
914 for f in list:
911 p = self.wjoin(f)
915 p = self.wjoin(f)
912 if os.path.exists(p):
916 if os.path.exists(p):
913 self.ui.warn(_("%s still exists!\n") % f)
917 self.ui.warn(_("%s still exists!\n") % f)
914 elif self.dirstate.state(f) == 'a':
918 elif self.dirstate.state(f) == 'a':
915 self.dirstate.forget([f])
919 self.dirstate.forget([f])
916 elif f not in self.dirstate:
920 elif f not in self.dirstate:
917 self.ui.warn(_("%s not tracked!\n") % f)
921 self.ui.warn(_("%s not tracked!\n") % f)
918 else:
922 else:
919 self.dirstate.update([f], "r")
923 self.dirstate.update([f], "r")
920
924
    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) != 'r':
                self.ui.warn(_("%s not removed!\n") % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), m.execf(f))
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

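The descending sort above is the decorate-sort-undecorate idiom. A minimal standalone sketch (the `rev_of` lookup is a hypothetical stand-in for `self.changelog.rev`):

```python
# Sketch of the sort used by heads(): decorate each node with its
# negated revision number, sort the pairs, then strip the decoration.
# Sorting ascending on -rev yields nodes in descending revision order.
def sort_heads_desc(heads, rev_of):
    decorated = [(-rev_of(h), h) for h in heads]
    decorated.sort()
    return [n for (_r, n) in decorated]

revs = {"a": 3, "b": 7, "c": 5}
print(sort_heads_desc(["a", "b", "c"], revs.get))  # ['b', 'c', 'a']
```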
    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                        aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # Callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (i.e. checkout of a specific
    # branch).
    #
    # Passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [h for h in heads]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head.  The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

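The `visible()` closure above memoizes reachability so the O(n^2) pairwise check only walks each subgraph once. A hedged sketch of that pattern with hypothetical names, using sets instead of the dict-as-set idiom:

```python
# Sketch of branchlookup's memoized visible(): compute the set of
# nodes reachable from a start node, caching each node's result so
# repeated queries reuse earlier traversals.
def reachable(node, edges, cache):
    if node in cache:
        return cache[node]
    ret, visit = set(), [node]
    while visit:
        x = visit.pop()
        if x in cache:
            ret |= cache[x]        # reuse a memoized subresult
        elif x not in ret:
            ret.add(x)
            visit.extend(edges.get(x, ()))
    cache[node] = ret
    return ret

cache = {}
edges = {"e": ["d", "a"], "d": ["a"]}
print(sorted(reachable("e", edges, cache)))  # ['a', 'd', 'e']
```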
    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

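Each tuple produced by `branches()` compresses a linear run of history into a single segment. A toy sketch with integer nodes and -1 standing in for nullid:

```python
# Sketch of branches()'s inner loop: follow first parents from a start
# node until hitting a merge (second parent set) or a root, and report
# the segment as (head, root, p1, p2).
NULL = -1

def linear_segment(start, parents):
    n = start
    while True:
        p1, p2 = parents[n]
        if p2 != NULL or p1 == NULL:
            return (start, n, p1, p2)
        n = p1

# 3 <- 2 <- 1, where 1 is a merge of 0 and 5.
parents = {5: (-1, -1), 0: (-1, -1), 1: (0, 5), 2: (1, -1), 3: (2, -1)}
print(linear_segment(3, parents))  # (3, 1, 0, 5): stops at the merge
```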
    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

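The loop in `between()` records nodes at exponentially spaced distances along the first-parent chain, giving the peer logarithmically many probe points for the later binary search. A standalone sketch with a hypothetical `parent` callable:

```python
# Record nodes at distances 1, 2, 4, 8, ... along the first-parent
# chain from top toward bottom -- mirrors between()'s inner loop.
def sample_between(top, bottom, parent):
    n, l, i, f = top, [], 0, 1
    while n != bottom:
        p = parent(n)
        if i == f:           # sample at the next power-of-two distance
            l.append(n)
            f *= 2
        n = p
        i += 1
    return l

# Linear history 0 <- 1 <- ... <- 20: samples fall at distances 1,2,4,8,16.
print(sample_between(20, 0, lambda n: n - 1))  # [19, 18, 16, 12, 4]
```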
    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore base will be updated to include the nodes that exist
        in both self and remote but none of whose children do.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote,
        see outgoing)
        """
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base is None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        req = dict.fromkeys(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                        for p in n[2:4]:
                            if p in m:
                                base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in xrange(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f[:4]))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.debug(_("found new changesets starting at ") +
                      " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base is None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
            if heads:
                if p1 in heads:
                    updated_heads[p1] = True
                if p2 in heads:
                    updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset

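The pruning step in `findoutgoing` can be isolated into a toy sketch (integer nodes, parents as a dict): drop every ancestor of the common base from the local node set, then keep the nodes whose parents were all pruned — those are the roots of the subsets to push.

```python
# Sketch of findoutgoing's pruning: remove base and all its ancestors
# from the local node set; a remaining node with no remaining parent
# is a root of a subset that the remote is missing.
def outgoing_roots(nodes, parents, base):
    remain = set(nodes)
    stack = list(base)
    while stack:                       # prune base and its ancestors
        n = stack.pop()
        if n in remain:
            remain.discard(n)
            stack.extend(parents[n])
    return sorted(n for n in remain
                  if all(p not in remain for p in parents[n]))

# Linear history 0 <- 1 <- 2 <- 3 with common base {1}: push root is 2.
parents = {0: [], 1: [0], 2: [1], 3: [2]}
print(outgoing_roots([0, 1, 2, 3], parents, base=[1]))  # [2]
```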
    def pull(self, remote, heads=None, force=False, lock=None):
        mylock = False
        if not lock:
            lock = self.lock()
            mylock = True

        try:
            fetch = self.findincoming(remote, force=force)
            if fetch == [nullid]:
                self.ui.status(_("requesting all changes\n"))

            if not fetch:
                self.ui.status(_("no changes found\n"))
                return 0

            if heads is None:
                cg = remote.changegroup(fetch, 'pull')
            else:
                if 'changegroupsubset' not in remote.capabilities:
                    raise util.Abort(_("partial pull cannot be done because "
                                       "other repository doesn't support "
                                       "changegroupsubset"))
                cg = remote.changegroupsubset(fetch, heads, 'pull')
            return self.addchangegroup(cg, 'pull', remote.url())
        finally:
            if mylock:
                lock.release()

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # check if we're creating new remote heads
            # to be a remote head after push, node must be either
            # - unknown locally
            # - a local outgoing head descended from update
            # - a remote head that's known locally and not
            #   ancestral to an outgoing head

            warn = 0

            if remote_heads == [nullid]:
                warn = 0
            elif not revs and len(heads) > len(remote_heads):
                warn = 1
            else:
                newheads = list(heads)
                for r in remote_heads:
                    if r in self.changelog.nodemap:
                        desc = self.changelog.heads(r)
                        l = [h for h in heads if h in desc]
                        if not l:
                            newheads.append(r)
                    else:
                        newheads.append(r)
                if len(newheads) > len(remote_heads):
                    warn = 1

            if warn:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return None, 1
        elif inc:
            self.ui.warn(_("note: unsynced remote changes!\n"))

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads

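The new-head check in `prepush` reduces to a counting argument: every remote head that no outgoing local head descends from survives the push, so if the combined head count exceeds the remote's current count, the push would create a new remote branch. A sketch with a hypothetical `descends_from` helper:

```python
# Sketch of prepush's new-head test: keep each remote head that no
# local head descends from; if more heads remain afterwards than the
# remote started with, the push creates a new remote branch.
def creates_new_heads(local_heads, remote_heads, descends_from):
    newheads = list(local_heads)
    for r in remote_heads:
        if not any(descends_from(h, r) for h in local_heads):
            newheads.append(r)   # r survives the push as a head
    return len(newheads) > len(remote_heads)

# 'c' extends remote head 'b'; 'x' starts an unrelated branch.
desc = lambda h, r: (h, r) == ("c", "b")
print(creates_new_heads(["c", "x"], ["b"], desc))  # True
print(creates_new_heads(["c"], ["b"], desc))       # False
```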
    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            return remote.addchangegroup(cg, 'push', self.url())
        return ret[1]

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push.  once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force:
                remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupinfo(self, nodes):
        self.ui.note(_("%d changesets found\n") % len(nodes))
        if self.ui.debugflag:
            self.ui.debug(_("List of changesets:\n"))
            for node in nodes:
                self.ui.debug("%s\n" % hex(node))

1408 def changegroupsubset(self, bases, heads, source):
1412 def changegroupsubset(self, bases, heads, source):
1409 """This function generates a changegroup consisting of all the nodes
1413 """This function generates a changegroup consisting of all the nodes
1410 that are descendents of any of the bases, and ancestors of any of
1414 that are descendents of any of the bases, and ancestors of any of
1411 the heads.
1415 the heads.
1412
1416
1413 It is fairly complex as determining which filenodes and which
1417 It is fairly complex as determining which filenodes and which
1414 manifest nodes need to be included for the changeset to be complete
1418 manifest nodes need to be included for the changeset to be complete
1415 is non-trivial.
1419 is non-trivial.
1416
1420
1417 Another wrinkle is doing the reverse, figuring out which changeset in
1421 Another wrinkle is doing the reverse, figuring out which changeset in
1418 the changegroup a particular filenode or manifestnode belongs to."""
1422 the changegroup a particular filenode or manifestnode belongs to."""
1419
1423
1420 self.hook('preoutgoing', throw=True, source=source)
1424 self.hook('preoutgoing', throw=True, source=source)
1421
1425
1422 # Set up some initial variables
1426 # Set up some initial variables
1423 # Make it easy to refer to self.changelog
1427 # Make it easy to refer to self.changelog
1424 cl = self.changelog
1428 cl = self.changelog
1425 # msng is short for missing - compute the list of changesets in this
1429 # msng is short for missing - compute the list of changesets in this
1426 # changegroup.
1430 # changegroup.
1427 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1431 msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
1432 self.changegroupinfo(msng_cl_lst)
1433 # Some bases may turn out to be superfluous, and some heads may be
1434 # too. nodesbetween will return the minimal set of bases and heads
1435 # necessary to re-create the changegroup.
1436
1437 # Known heads are the list of heads that it is assumed the recipient
1438 # of this changegroup will know about.
1439 knownheads = {}
1440 # We assume that all parents of bases are known heads.
1441 for n in bases:
1442 for p in cl.parents(n):
1443 if p != nullid:
1444 knownheads[p] = 1
1445 knownheads = knownheads.keys()
1446 if knownheads:
1447 # Now that we know what heads are known, we can compute which
1448 # changesets are known. The recipient must know about all
1449 # changesets required to reach the known heads from the null
1450 # changeset.
1451 has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
1452 junk = None
1453 # Transform the list into an ersatz set.
1454 has_cl_set = dict.fromkeys(has_cl_set)
1455 else:
1456 # If there were no known heads, the recipient cannot be assumed to
1457 # know about any changesets.
1458 has_cl_set = {}
1459
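The "ersatz set" idiom above is a Python 2-era pattern from before the built-in `set` type was available: a dict whose values are throwaway serves as a membership-testing set. A minimal sketch, using hypothetical node names for illustration (modern code would simply use `set()`):

```python
# Python 2-era idiom: a dict with None values acts as a set.
has_cl_set = dict.fromkeys(["node1", "node2", "node3"])

# Membership testing works just like a set:
print("node1" in has_cl_set)  # True
print("node9" in has_cl_set)  # False

# The modern equivalent is simply:
modern = set(["node1", "node2", "node3"])
```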
1460 # Make it easy to refer to self.manifest
1461 mnfst = self.manifest
1462 # We don't know which manifests are missing yet
1463 msng_mnfst_set = {}
1464 # Nor do we know which filenodes are missing.
1465 msng_filenode_set = {}
1466
1467 junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
1468 junk = None
1469
1470 # A changeset always belongs to itself, so the changenode lookup
1471 # function for a changenode is identity.
1472 def identity(x):
1473 return x
1474
1475 # A function generating function. Sets up an environment for the
1476 # inner function.
1477 def cmp_by_rev_func(revlog):
1478 # Compare two nodes by their revision number in the environment's
1479 # revision history. Since the revision number both represents the
1480 # most efficient order to read the nodes in, and represents a
1481 # topological sorting of the nodes, this function is often useful.
1482 def cmp_by_rev(a, b):
1483 return cmp(revlog.rev(a), revlog.rev(b))
1484 return cmp_by_rev
1485
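The closure pattern in cmp_by_rev_func — a factory that captures the revlog so the returned function needs only the nodes — can be sketched standalone. This uses a hypothetical stand-in for a revlog, and a key function in place of the Python 2 `cmp` protocol:

```python
# Hypothetical stand-in for a revlog: maps node -> revision number.
class FakeRevlog:
    def __init__(self, revs):
        self._revs = revs
    def rev(self, node):
        return self._revs[node]

# The closure captures the revlog, so the returned function takes
# only the node itself -- the same shape as cmp_by_rev_func above.
def rev_key_func(revlog):
    def rev_key(node):
        return revlog.rev(node)
    return rev_key

rl = FakeRevlog({"a": 2, "b": 0, "c": 1})
nodes = ["a", "b", "c"]
# Sorting by revision number yields a topological order.
nodes.sort(key=rev_key_func(rl))
print(nodes)  # ['b', 'c', 'a']
```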
1486 # If we determine that a particular file or manifest node must be a
1487 # node that the recipient of the changegroup will already have, we can
1488 # also assume the recipient will have all the parents. This function
1489 # prunes them from the set of missing nodes.
1490 def prune_parents(revlog, hasset, msngset):
1491 haslst = hasset.keys()
1492 haslst.sort(cmp_by_rev_func(revlog))
1493 for node in haslst:
1494 parentlst = [p for p in revlog.parents(node) if p != nullid]
1495 while parentlst:
1496 n = parentlst.pop()
1497 if n not in hasset:
1498 hasset[n] = 1
1499 p = [p for p in revlog.parents(n) if p != nullid]
1500 parentlst.extend(p)
1501 for n in hasset:
1502 msngset.pop(n, None)
1503
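The pruning walk above is a depth-first ancestor traversal: every ancestor of a node the recipient has is also had, so none of them needs to be sent. A minimal sketch under the assumption of a hypothetical parent map in place of a real revlog:

```python
# Hypothetical parent map: d -> c -> b -> a (a is the root).
parents = {"d": ["c"], "c": ["b"], "b": ["a"], "a": []}

def prune_parents(parent_map, hasset, msngset):
    # Walk the ancestors of every known node, marking them known too.
    for node in list(hasset):
        stack = list(parent_map[node])
        while stack:
            n = stack.pop()
            if n not in hasset:
                hasset[n] = 1
                stack.extend(parent_map[n])
    # Nothing the recipient already has needs to be in the missing set.
    for n in hasset:
        msngset.pop(n, None)

hasset = {"c": 1}
msngset = {"a": 1, "b": 1, "d": 1}
prune_parents(parents, hasset, msngset)
print(sorted(msngset))  # ['d']
```

Only "d" survives: it is a descendant, not an ancestor, of the known node "c".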
1504 # This is a function generating function used to set up an environment
1505 # for the inner function to execute in.
1506 def manifest_and_file_collector(changedfileset):
1507 # This is an information gathering function that gathers
1508 # information from each changeset node that goes out as part of
1509 # the changegroup. The information gathered is a list of which
1510 # manifest nodes are potentially required (the recipient may
1511 # already have them) and the total list of all files which were
1512 # changed in any changeset in the changegroup.
1513 #
1514 # We also remember the first changenode we saw any manifest
1515 # referenced by so we can later determine which changenode 'owns'
1516 # the manifest.
1517 def collect_manifests_and_files(clnode):
1518 c = cl.read(clnode)
1519 for f in c[3]:
1520 # This is to make sure we only have one instance of each
1521 # filename string for each filename.
1522 changedfileset.setdefault(f, f)
1523 msng_mnfst_set.setdefault(c[0], clnode)
1524 return collect_manifests_and_files
1525
1526 # Figure out which manifest nodes (of the ones we think might be part
1527 # of the changegroup) the recipient must know about and remove them
1528 # from the changegroup.
1529 def prune_manifests():
1530 has_mnfst_set = {}
1531 for n in msng_mnfst_set:
1532 # If a 'missing' manifest thinks it belongs to a changenode
1533 # the recipient is assumed to have, obviously the recipient
1534 # must have that manifest.
1535 linknode = cl.node(mnfst.linkrev(n))
1536 if linknode in has_cl_set:
1537 has_mnfst_set[n] = 1
1538 prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)
1539
1540 # Use the information collected in collect_manifests_and_files to say
1541 # which changenode any manifestnode belongs to.
1542 def lookup_manifest_link(mnfstnode):
1543 return msng_mnfst_set[mnfstnode]
1544
1545 # A function generating function that sets up the initial environment
1546 # for the inner function.
1547 def filenode_collector(changedfiles):
1548 next_rev = [0]
1549 # This gathers information from each manifestnode included in the
1550 # changegroup about which filenodes the manifest node references
1551 # so we can include those in the changegroup too.
1552 #
1553 # It also remembers which changenode each filenode belongs to. It
1554 # does this by assuming that a filenode belongs to the changenode
1555 # the first manifest that references it belongs to.
1556 def collect_msng_filenodes(mnfstnode):
1557 r = mnfst.rev(mnfstnode)
1558 if r == next_rev[0]:
1559 # If the last rev we looked at was the one just previous,
1560 # we only need to see a diff.
1561 delta = mdiff.patchtext(mnfst.delta(mnfstnode))
1562 # For each line in the delta
1563 for dline in delta.splitlines():
1564 # get the filename and filenode for that line
1565 f, fnode = dline.split('\0')
1566 fnode = bin(fnode[:40])
1567 f = changedfiles.get(f, None)
1568 # And if the file is in the list of files we care
1569 # about.
1570 if f is not None:
1571 # Get the changenode this manifest belongs to
1572 clnode = msng_mnfst_set[mnfstnode]
1573 # Create the set of filenodes for the file if
1574 # there isn't one already.
1575 ndset = msng_filenode_set.setdefault(f, {})
1576 # And set the filenode's changelog node to the
1577 # manifest's if it hasn't been set already.
1578 ndset.setdefault(fnode, clnode)
1579 else:
1580 # Otherwise we need a full manifest.
1581 m = mnfst.read(mnfstnode)
1582 # For every file we care about.
1583 for f in changedfiles:
1584 fnode = m.get(f, None)
1585 # If it's in the manifest
1586 if fnode is not None:
1587 # See comments above.
1588 clnode = msng_mnfst_set[mnfstnode]
1589 ndset = msng_filenode_set.setdefault(f, {})
1590 ndset.setdefault(fnode, clnode)
1591 # Remember the revision we hope to see next.
1592 next_rev[0] = r + 1
1593 return collect_msng_filenodes
1594
1595 # We have a list of filenodes we think we need for a file, let's remove
1596 # all those we know the recipient must have.
1597 def prune_filenodes(f, filerevlog):
1598 msngset = msng_filenode_set[f]
1599 hasset = {}
1600 # If a 'missing' filenode thinks it belongs to a changenode we
1601 # assume the recipient must have, then the recipient must have
1602 # that filenode.
1603 for n in msngset:
1604 clnode = cl.node(filerevlog.linkrev(n))
1605 if clnode in has_cl_set:
1606 hasset[n] = 1
1607 prune_parents(filerevlog, hasset, msngset)
1608
1609 # A function generator function that sets up a context for the
1610 # inner function.
1611 def lookup_filenode_link_func(fname):
1612 msngset = msng_filenode_set[fname]
1613 # Look up the changenode the filenode belongs to.
1614 def lookup_filenode_link(fnode):
1615 return msngset[fnode]
1616 return lookup_filenode_link
1617
1618 # Now that we have all these utility functions to help out and
1619 # logically divide up the task, generate the group.
1620 def gengroup():
1621 # The set of changed files starts empty.
1622 changedfiles = {}
1623 # Create a changenode group generator that will call our functions
1624 # back to lookup the owning changenode and collect information.
1625 group = cl.group(msng_cl_lst, identity,
1626 manifest_and_file_collector(changedfiles))
1627 for chnk in group:
1628 yield chnk
1629
1630 # The list of manifests has been collected by the generator
1631 # calling our functions back.
1632 prune_manifests()
1633 msng_mnfst_lst = msng_mnfst_set.keys()
1634 # Sort the manifestnodes by revision number.
1635 msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
1636 # Create a generator for the manifestnodes that calls our lookup
1637 # and data collection functions back.
1638 group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
1639 filenode_collector(changedfiles))
1640 for chnk in group:
1641 yield chnk
1642
1643 # These are no longer needed, dereference and toss the memory for
1644 # them.
1645 msng_mnfst_lst = None
1646 msng_mnfst_set.clear()
1647
1648 changedfiles = changedfiles.keys()
1649 changedfiles.sort()
1650 # Go through all our files in order sorted by name.
1651 for fname in changedfiles:
1652 filerevlog = self.file(fname)
1653 # Toss out the filenodes that the recipient isn't really
1654 # missing.
1655 if msng_filenode_set.has_key(fname):
1656 prune_filenodes(fname, filerevlog)
1657 msng_filenode_lst = msng_filenode_set[fname].keys()
1658 else:
1659 msng_filenode_lst = []
1660 # If any filenodes are left, generate the group for them,
1661 # otherwise don't bother.
1662 if len(msng_filenode_lst) > 0:
1663 yield changegroup.genchunk(fname)
1664 # Sort the filenodes by their revision #
1665 msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
1666 # Create a group generator and only pass in a changenode
1667 # lookup function as we need to collect no information
1668 # from filenodes.
1669 group = filerevlog.group(msng_filenode_lst,
1670 lookup_filenode_link_func(fname))
1671 for chnk in group:
1672 yield chnk
1673 if msng_filenode_set.has_key(fname):
1674 # Don't need this anymore, toss it to free memory.
1675 del msng_filenode_set[fname]
1676 # Signal that no more groups are left.
1677 yield changegroup.closechunk()
1678
1679 if msng_cl_lst:
1680 self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)
1681
1682 return util.chunkbuffer(gengroup())
1683
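gengroup yields the changegroup chunk by chunk, terminated by an empty close chunk, so the whole payload is never materialized at once. A minimal sketch of that generator-streaming pattern, with hypothetical byte chunks and a plain read loop standing in for Mercurial's util.chunkbuffer:

```python
def gengroup():
    # Yield the payload piece by piece instead of building one big string.
    for part in (b"chunk1", b"chunk2", b"chunk3"):
        yield part
    # An empty chunk signals the end, like changegroup.closechunk().
    yield b""

# A consumer streams the chunks without holding them all in memory.
out = []
for chunk in gengroup():
    if not chunk:
        break
    out.append(chunk)
print(b"".join(out))  # b'chunk1chunk2chunk3'
```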
1684 def changegroup(self, basenodes, source):
1685 """Generate a changegroup of all nodes that we have that a recipient
1686 doesn't.
1687
1688 This is much easier than the previous function as we can assume that
1689 the recipient has any changenode we aren't sending them."""
1690
1691 self.hook('preoutgoing', throw=True, source=source)
1692
1693 cl = self.changelog
1694 nodes = cl.nodesbetween(basenodes, None)[0]
1695 revset = dict.fromkeys([cl.rev(n) for n in nodes])
1696 self.changegroupinfo(nodes)
1697
1698 def identity(x):
1699 return x
1700
1701 def gennodelst(revlog):
1702 for r in xrange(0, revlog.count()):
1703 n = revlog.node(r)
1704 if revlog.linkrev(n) in revset:
1705 yield n
1706
1707 def changed_file_collector(changedfileset):
1708 def collect_changed_files(clnode):
1709 c = cl.read(clnode)
1710 for fname in c[3]:
1711 changedfileset[fname] = 1
1712 return collect_changed_files
1713
1714 def lookuprevlink_func(revlog):
1715 def lookuprevlink(n):
1716 return cl.node(revlog.linkrev(n))
1717 return lookuprevlink
1718
1719 def gengroup():
1720 # construct a list of all changed files
1721 changedfiles = {}
1722
1723 for chnk in cl.group(nodes, identity,
1724 changed_file_collector(changedfiles)):
1725 yield chnk
1726 changedfiles = changedfiles.keys()
1727 changedfiles.sort()
1728
1729 mnfst = self.manifest
1730 nodeiter = gennodelst(mnfst)
1731 for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
1732 yield chnk
1733
1734 for fname in changedfiles:
1735 filerevlog = self.file(fname)
1736 nodeiter = gennodelst(filerevlog)
1737 nodeiter = list(nodeiter)
1738 if nodeiter:
1739 yield changegroup.genchunk(fname)
1740 lookup = lookuprevlink_func(filerevlog)
1741 for chnk in filerevlog.group(nodeiter, lookup):
1742 yield chnk
1743
1744 yield changegroup.closechunk()
1745
1746 if nodes:
1747 self.hook('outgoing', node=hex(nodes[0]), source=source)
1748
1749 return util.chunkbuffer(gengroup())
1750
1751 def addchangegroup(self, source, srctype, url):
1752 """add changegroup to repo.
1753
1754 return values:
1755 - nothing changed or no source: 0
1756 - more heads than before: 1+added heads (2..n)
1757 - fewer heads than before: -1-removed heads (-2..-n)
1758 - number of heads stays the same: 1
1759 """
1760 def csmap(x):
1761 self.ui.debug(_("add changeset %s\n") % short(x))
1762 return cl.count()
1763
1764 def revmap(x):
1765 return cl.rev(x)
1766
1767 if not source:
1768 return 0
1769
1770 self.hook('prechangegroup', throw=True, source=srctype, url=url)
1771
1772 changesets = files = revisions = 0
1773
1774 tr = self.transaction()
1775
1776 # write changelog data to temp files so concurrent readers will not
1777 # see an inconsistent view
1778 cl = None
1779 try:
1780 cl = appendfile.appendchangelog(self.sopener,
1781 self.changelog.version)
1782
1783 oldheads = len(cl.heads())
1784
1785 # pull off the changeset group
1786 self.ui.status(_("adding changesets\n"))
1787 cor = cl.count() - 1
1788 chunkiter = changegroup.chunkiter(source)
1789 if cl.addgroup(chunkiter, csmap, tr, 1) is None:
1790 raise util.Abort(_("received changelog group is empty"))
1791 cnr = cl.count() - 1
1792 changesets = cnr - cor
1793
1794 # pull off the manifest group
1795 self.ui.status(_("adding manifests\n"))
1796 chunkiter = changegroup.chunkiter(source)
1797 # no need to check for empty manifest group here:
1798 # if the result of the merge of 1 and 2 is the same in 3 and 4,
1799 # no new manifest will be created and the manifest group will
1800 # be empty during the pull
1801 self.manifest.addgroup(chunkiter, revmap, tr)
1802
1803 # process the files
1804 self.ui.status(_("adding file changes\n"))
1805 while 1:
1806 f = changegroup.getchunk(source)
1807 if not f:
1808 break
1809 self.ui.debug(_("adding %s revisions\n") % f)
1810 fl = self.file(f)
1811 o = fl.count()
1812 chunkiter = changegroup.chunkiter(source)
1813 if fl.addgroup(chunkiter, revmap, tr) is None:
1814 raise util.Abort(_("received file revlog group is empty"))
1815 revisions += fl.count() - o
1816 files += 1
1817
1818 cl.writedata()
1819 finally:
1820 if cl:
1821 cl.cleanup()
1822
1823 # make changelog see real files again
1824 self.changelog = changelog.changelog(self.sopener,
1825 self.changelog.version)
1826 self.changelog.checkinlinesize(tr)
1827
1828 newheads = len(self.changelog.heads())
1829 heads = ""
1830 if oldheads and newheads != oldheads:
1831 heads = _(" (%+d heads)") % (newheads - oldheads)
1832
1833 self.ui.status(_("added %d changesets"
1834 " with %d changes to %d files%s\n")
1835 % (changesets, revisions, files, heads))
1836
1837 if changesets > 0:
1838 self.hook('pretxnchangegroup', throw=True,
1839 node=hex(self.changelog.node(cor+1)), source=srctype,
1840 url=url)
1841
1842 tr.close()
1843
1844 if changesets > 0:
1845 self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
1846 source=srctype, url=url)
1847
1848 for i in xrange(cor + 1, cnr + 1):
1849 self.hook("incoming", node=hex(self.changelog.node(i)),
1850 source=srctype, url=url)
1851
1852 # never return 0 here:
1853 if newheads < oldheads:
1854 return newheads - oldheads - 1
1855 else:
1856 return newheads - oldheads + 1
1857
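The return-value convention at the end of addchangegroup reserves 0 for "nothing changed", so the head-count delta is shifted away from zero in both directions. A standalone sketch of that mapping, with hypothetical head counts:

```python
def head_delta_code(oldheads, newheads):
    # Never return 0 here: 0 is reserved for "nothing changed".
    if newheads < oldheads:
        return newheads - oldheads - 1   # -2..-n: heads were removed
    else:
        return newheads - oldheads + 1   # 1: same count, 2..n: heads added

print(head_delta_code(2, 2))  # 1   (head count unchanged)
print(head_delta_code(2, 4))  # 3   (two heads added)
print(head_delta_code(3, 1))  # -3  (two heads removed)
```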
1858
1859 def stream_in(self, remote):
1860 fp = remote.stream_out()
1861 l = fp.readline()
1862 try:
1863 resp = int(l)
1864 except ValueError:
1865 raise util.UnexpectedOutput(
1866 _('Unexpected response from remote server:'), l)
1867 if resp == 1:
1868 raise util.Abort(_('operation forbidden by server'))
1869 elif resp == 2:
1870 raise util.Abort(_('locking the remote repository failed'))
1871 elif resp != 0:
1872 raise util.Abort(_('the server sent an unknown error code'))
1873 self.ui.status(_('streaming all changes\n'))
1874 l = fp.readline()
1875 try:
1876 total_files, total_bytes = map(int, l.split(' ', 1))
1877 except (ValueError, TypeError):
1878 raise util.UnexpectedOutput(
1879 _('Unexpected response from remote server:'), l)
1880 self.ui.status(_('%d files to transfer, %s of data\n') %
1881 (total_files, util.bytecount(total_bytes)))
1882 start = time.time()
1883 for i in xrange(total_files):
1884 # XXX doesn't support '\n' or '\r' in filenames
1885 l = fp.readline()
1886 try:
1887 name, size = l.split('\0', 1)
1888 size = int(size)
1889 except (ValueError, TypeError):
1890 raise util.UnexpectedOutput(
1891 _('Unexpected response from remote server:'), l)
1892 self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
1893 ofp = self.sopener(name, 'w')
1894 for chunk in util.filechunkiter(fp, limit=size):
1895 ofp.write(chunk)
1896 ofp.close()
1897 elapsed = time.time() - start
1898 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1899 (util.bytecount(total_bytes), elapsed,
1900 util.bytecount(total_bytes / elapsed)))
1901 self.reload()
1902 return len(self.heads()) + 1
1903
1904 def clone(self, remote, heads=[], stream=False):
1905 '''clone remote repository.
1906
1907 keyword arguments:
1908 heads: list of revs to clone (forces use of pull)
1909 stream: use streaming clone if possible'''
1910
1911 # now, all clients that can request uncompressed clones can
1912 # read repo formats supported by all servers that can serve
1913 # them.
1914
1915 # if revlog format changes, client will have to check version
1916 # and format flags on "stream" capability, and use
1917 # uncompressed only if compatible.
1918
1919 if stream and not heads and remote.capable('stream'):
1920 return self.stream_in(remote)
1921 return self.pull(remote, heads)
1922
1923 # used to avoid circular references so destructors work
1924 def aftertrans(files):
1925 renamefiles = [tuple(t) for t in files]
1926 def a():
1927 for src, dest in renamefiles:
1928 util.rename(src, dest)
1929 return a
1926
1930
1927 def instance(ui, path, create):
1931 def instance(ui, path, create):
1928 return localrepository(ui, util.drop_scheme('file', path), create)
1932 return localrepository(ui, util.drop_scheme('file', path), create)
1929
1933
1930 def islocal(path):
1934 def islocal(path):
1931 return True
1935 return True
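A minimal sketch of the streaming-clone wire format that stream_in parses above (illustrative only, not part of the repository code; the helper name is hypothetical): each entry is a header line 'filename\0size\n' followed by exactly `size` bytes of file data.

```python
def parse_stream_entry(line):
    # split on the first NUL byte; the remainder is the decimal size
    name, size = line.rstrip('\n').split('\0', 1)
    return name, int(size)
```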
"""
util.py - Mercurial utility functions and platform specific implementations

 Copyright 2005 K. Thananchayan <thananck@yahoo.com>
 Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
 Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>

This software may be used and distributed according to the terms
of the GNU General Public License, incorporated herein by reference.

This contains helper routines that are independent of the SCM core and hide
platform-specific details from the core.
"""

from i18n import gettext as _
from demandload import *
demandload(globals(), "cStringIO errno getpass popen2 re shutil sys tempfile")
demandload(globals(), "os threading time calendar ConfigParser locale")

_encoding = os.environ.get("HGENCODING") or locale.getpreferredencoding()
_encodingmode = os.environ.get("HGENCODINGMODE", "strict")
_fallbackencoding = 'ISO-8859-1'

def tolocal(s):
    """
    Convert a string from internal UTF-8 to local encoding

    All internal strings should be UTF-8 but some repos before the
    implementation of locale support may contain latin1 or possibly
    other character sets. We attempt to decode everything strictly
    using UTF-8, then the fallback encoding, and failing that, we use
    UTF-8 and replace unknown characters.
    """
    for e in ('UTF-8', _fallbackencoding):
        try:
            u = s.decode(e) # attempt strict decoding
            return u.encode(_encoding, "replace")
        except UnicodeDecodeError:
            pass
    u = s.decode("utf-8", "replace") # last ditch
    return u.encode(_encoding, "replace")

def fromlocal(s):
    """
    Convert a string from the local character encoding to UTF-8

    We attempt to decode strings using the encoding mode set by
    HGENCODINGMODE, which defaults to 'strict'. In this mode, unknown
    characters will cause an error message. Other modes include
    'replace', which replaces unknown characters with a special
    Unicode character, and 'ignore', which drops the character.
    """
    try:
        return s.decode(_encoding, _encodingmode).encode("utf-8")
    except UnicodeDecodeError, inst:
        sub = s[max(0, inst.start-10):inst.start+10]
        raise Abort("decoding near '%s': %s!\n" % (sub, inst))

def locallen(s):
    """Find the length in characters of a local string"""
    return len(s.decode(_encoding, "replace"))

def localsub(s, a, b=None):
    try:
        u = s.decode(_encoding, _encodingmode)
        if b is not None:
            u = u[a:b]
        else:
            u = u[:a]
        return u.encode(_encoding, _encodingmode)
    except UnicodeDecodeError, inst:
        # slice out the surrounding context (a colon, not a tuple index)
        sub = s[max(0, inst.start-10):inst.start+10]
        raise Abort("decoding near '%s': %s!\n" % (sub, inst))

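The decode strategy used by tolocal() can be sketched in isolation. This is a minimal, self-contained illustration (the function name and defaults are hypothetical, and it uses byte strings as modern Python does): try strict UTF-8 first, then the configurable fallback encoding, and as a last resort decode as UTF-8 with replacement characters.

```python
def decode_with_fallback(s, localenc='utf-8', fallback='iso-8859-1'):
    # try strict UTF-8, then the fallback encoding (mirrors tolocal())
    for e in ('utf-8', fallback):
        try:
            return s.decode(e).encode(localenc, 'replace')
        except UnicodeDecodeError:
            pass
    # last ditch: UTF-8 with replacement characters
    return s.decode('utf-8', 'replace').encode(localenc, 'replace')
```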
# used by parsedate
defaultdateformats = (
    '%Y-%m-%d %H:%M:%S',
    '%Y-%m-%d %I:%M:%S%p',
    '%Y-%m-%d %H:%M',
    '%Y-%m-%d %I:%M%p',
    '%Y-%m-%d',
    '%m-%d',
    '%m/%d',
    '%m/%d/%y',
    '%m/%d/%Y',
    '%a %b %d %H:%M:%S %Y',
    '%a %b %d %I:%M:%S%p %Y',
    '%b %d %H:%M:%S %Y',
    '%b %d %I:%M:%S%p %Y',
    '%b %d %H:%M:%S',
    '%b %d %I:%M:%S%p',
    '%b %d %H:%M',
    '%b %d %I:%M%p',
    '%b %d %Y',
    '%b %d',
    '%H:%M:%S',
    '%I:%M:%S%p',
    '%H:%M',
    '%I:%M%p',
    )

extendeddateformats = defaultdateformats + (
    "%Y",
    "%Y-%m",
    "%b",
    "%b %Y",
    )

class SignalInterrupt(Exception):
    """Exception raised on SIGTERM and SIGHUP."""

# like SafeConfigParser but with case-sensitive keys
class configparser(ConfigParser.SafeConfigParser):
    def optionxform(self, optionstr):
        return optionstr

def cachefunc(func):
    '''cache the result of function calls'''
    # XXX doesn't handle keyword args
    cache = {}
    if func.func_code.co_argcount == 1:
        # we gain a small amount of time because
        # we don't need to pack/unpack the list
        def f(arg):
            if arg not in cache:
                cache[arg] = func(arg)
            return cache[arg]
    else:
        def f(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]

    return f

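The cachefunc() idea is plain memoization keyed by the positional-argument tuple. A minimal modern-Python sketch of the same technique (decorator name hypothetical; keyword arguments are not handled, matching the XXX note above):

```python
def memoize(func):
    cache = {}
    def f(*args):
        # compute only on a cache miss, keyed by the argument tuple
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return f

calls = []

@memoize
def square(x):
    calls.append(x)  # record real invocations to demonstrate caching
    return x * x
```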
def pipefilter(s, cmd):
    '''filter string S through command CMD, returning its output'''
    (pout, pin) = popen2.popen2(cmd, -1, 'b')
    def writer():
        try:
            pin.write(s)
            pin.close()
        except IOError, inst:
            if inst.errno != errno.EPIPE:
                raise

    # we should use select instead on UNIX, but this will work on most
    # systems, including Windows
    w = threading.Thread(target=writer)
    w.start()
    f = pout.read()
    pout.close()
    w.join()
    return f

def tempfilter(s, cmd):
    '''filter string S through a pair of temporary files with CMD.
    CMD is used as a template to create the real command to be run,
    with the strings INFILE and OUTFILE replaced by the real names of
    the temporary files generated.'''
    inname, outname = None, None
    try:
        infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
        fp = os.fdopen(infd, 'wb')
        fp.write(s)
        fp.close()
        outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
        os.close(outfd)
        cmd = cmd.replace('INFILE', inname)
        cmd = cmd.replace('OUTFILE', outname)
        code = os.system(cmd)
        if code: raise Abort(_("command '%s' failed: %s") %
                             (cmd, explain_exit(code)))
        return open(outname, 'rb').read()
    finally:
        try:
            if inname: os.unlink(inname)
        except: pass
        try:
            if outname: os.unlink(outname)
        except: pass

filtertable = {
    'tempfile:': tempfilter,
    'pipe:': pipefilter,
    }

def filter(s, cmd):
    "filter a string through a command that transforms its input to its output"
    for name, fn in filtertable.iteritems():
        if cmd.startswith(name):
            return fn(s, cmd[len(name):].lstrip())
    return pipefilter(s, cmd)

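The dispatch pattern behind filter() is worth isolating: the command string selects a handler by prefix, the prefix is stripped off before the handler runs, and unrecognized commands fall through to a default. A self-contained sketch with pure-Python stand-in handlers (hypothetical, not the real subprocess-based tempfilter/pipefilter):

```python
# toy handler table keyed by command prefix, like filtertable above
table = {
    'upper:': lambda s, rest: s.upper(),
    'prefix:': lambda s, rest: rest + s,
}

def dispatch(s, cmd, default=lambda s, rest: s):
    for name, fn in table.items():
        if cmd.startswith(name):
            # strip the prefix; the handler sees only the remainder
            return fn(s, cmd[len(name):].lstrip())
    return default(s, cmd)
```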
def find_in_path(name, path, default=None):
    '''find name in search path. path can be string (will be split
    with os.pathsep), or iterable thing that returns strings. if name
    found, return path to name. else return default.'''
    if isinstance(path, str):
        path = path.split(os.pathsep)
    for p in path:
        p_name = os.path.join(p, name)
        if os.path.exists(p_name):
            return p_name
    return default

def binary(s):
    """return true if a string is binary data using diff's heuristic"""
    if s and '\0' in s[:4096]:
        return True
    return False

def unique(g):
    """return the unique elements of iterable g"""
    seen = {}
    l = []
    for f in g:
        if f not in seen:
            seen[f] = 1
            l.append(f)
    return l

class Abort(Exception):
    """Raised if a command needs to print an error and exit."""

class UnexpectedOutput(Abort):
    """Raised to print an error with part of output and exit."""

def always(fn): return True
def never(fn): return False

def patkind(name, dflt_pat='glob'):
    """Split a string into an optional pattern kind prefix and the
    actual pattern."""
    for prefix in 're', 'glob', 'path', 'relglob', 'relpath', 'relre':
        if name.startswith(prefix + ':'): return name.split(':', 1)
    return dflt_pat, name

def globre(pat, head='^', tail='$'):
    "convert a glob pattern into a regexp"
    i, n = 0, len(pat)
    res = ''
    group = False
    def peek(): return i < n and pat[i]
    while i < n:
        c = pat[i]
        i = i+1
        if c == '*':
            if peek() == '*':
                i += 1
                res += '.*'
            else:
                res += '[^/]*'
        elif c == '?':
            res += '.'
        elif c == '[':
            j = i
            if j < n and pat[j] in '!]':
                j += 1
            while j < n and pat[j] != ']':
                j += 1
            if j >= n:
                res += '\\['
            else:
                stuff = pat[i:j].replace('\\','\\\\')
                i = j + 1
                if stuff[0] == '!':
                    stuff = '^' + stuff[1:]
                elif stuff[0] == '^':
                    stuff = '\\' + stuff
                res = '%s[%s]' % (res, stuff)
        elif c == '{':
            group = True
            res += '(?:'
        elif c == '}' and group:
            res += ')'
            group = False
        elif c == ',' and group:
            res += '|'
        elif c == '\\':
            p = peek()
            if p:
                i += 1
                res += re.escape(p)
            else:
                res += re.escape(c)
        else:
            res += re.escape(c)
    return head + res + tail

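The key translation rules in globre() are that '*' stays within one path component while '**' crosses component boundaries. The regexes below are hand-written to mirror those rules (they are illustrations of the expected translation, not calls into globre itself):

```python
import re

# what globre('foo/*.py') should be equivalent to: '*' -> '[^/]*'
single = re.compile(r'^foo/[^/]*\.py$')
# what globre('foo/**.py') should be equivalent to: '**' -> '.*'
double = re.compile(r'^foo/.*\.py$')
```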
_globchars = {'[': 1, '{': 1, '*': 1, '?': 1}

def pathto(n1, n2):
    '''return the relative path from one place to another.
    n1 should use os.sep to separate directories
    n2 should use "/" to separate directories
    returns an os.sep-separated path.
    '''
    if not n1: return localpath(n2)
    a, b = n1.split(os.sep), n2.split('/')
    a.reverse()
    b.reverse()
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    return os.sep.join((['..'] * len(a)) + b)

def canonpath(root, cwd, myname):
    """return the canonical path of myname, given cwd and root"""
    if root == os.sep:
        rootsep = os.sep
    elif root.endswith(os.sep):
        rootsep = root
    else:
        rootsep = root + os.sep
    name = myname
    if not os.path.isabs(name):
        name = os.path.join(root, cwd, name)
    name = os.path.normpath(name)
    if name != rootsep and name.startswith(rootsep):
        name = name[len(rootsep):]
        audit_path(name)
        return pconvert(name)
    elif name == root:
        return ''
    else:
        # Determine whether `name' is in the hierarchy at or beneath `root',
        # by iterating name=dirname(name) until that causes no change (can't
        # check name == '/', because that doesn't work on windows). For each
        # `name', compare dev/inode numbers. If they match, the list `rel'
        # holds the reversed list of components making up the relative file
        # name we want.
        root_st = os.stat(root)
        rel = []
        while True:
            try:
                name_st = os.stat(name)
            except OSError:
                break
            if samestat(name_st, root_st):
                rel.reverse()
                name = os.path.join(*rel)
                audit_path(name)
                return pconvert(name)
            dirname, basename = os.path.split(name)
            rel.append(basename)
            if dirname == name:
                break
            name = dirname

        raise Abort('%s not under root' % myname)

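The algorithm in pathto() is short enough to restate as a self-contained sketch (a simplified re-implementation for illustration, assuming '/' separators on both sides rather than os.sep): strip the common prefix of the two paths, then climb out of whatever remains of the starting path with '..' components.

```python
def relpath_between(n1, n2):
    if not n1:
        return n2
    a, b = n1.split('/'), n2.split('/')
    a.reverse()
    b.reverse()
    # pop the shared leading components off both paths
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    # one '..' per remaining component of n1, then descend into n2
    return '/'.join(['..'] * len(a) + b)
```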
def matcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
    return _matcher(canonroot, cwd, names, inc, exc, head, 'glob', src)

def cmdmatcher(canonroot, cwd='', names=['.'], inc=[], exc=[], head='', src=None):
    if os.name == 'nt':
        dflt_pat = 'glob'
    else:
        dflt_pat = 'relpath'
    return _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src)

def _matcher(canonroot, cwd, names, inc, exc, head, dflt_pat, src):
    """build a function to match a set of file patterns

    arguments:
    canonroot - the canonical root of the tree you're matching against
    cwd - the current working directory, if relevant
    names - patterns to find
    inc - patterns to include
    exc - patterns to exclude
    head - a regex to prepend to patterns to control whether a match is rooted

    a pattern is one of:
    'glob:<rooted glob>'
    're:<rooted regexp>'
    'path:<rooted path>'
    'relglob:<relative glob>'
    'relpath:<relative path>'
    'relre:<relative regexp>'
    '<rooted path or regexp>'

    returns:
    a 3-tuple containing
    - list of explicit non-pattern names passed in
    - a bool match(filename) function
    - a bool indicating if any patterns were passed in

    todo:
    make head regex a rooted bool
    """

    def contains_glob(name):
        for c in name:
            if c in _globchars: return True
        return False

    def regex(kind, name, tail):
        '''convert a pattern into a regular expression'''
        if kind == 're':
            return name
        elif kind == 'path':
            return '^' + re.escape(name) + '(?:/|$)'
        elif kind == 'relglob':
            return head + globre(name, '(?:|.*/)', tail)
        elif kind == 'relpath':
            return head + re.escape(name) + tail
        elif kind == 'relre':
            if name.startswith('^'):
                return name
            return '.*' + name
        return head + globre(name, '', tail)

    def matchfn(pats, tail):
        """build a matching function from a set of patterns"""
        if not pats:
            return
        matches = []
        for k, p in pats:
            try:
                pat = '(?:%s)' % regex(k, p, tail)
                matches.append(re.compile(pat).match)
            except re.error:
                if src: raise Abort("%s: invalid pattern (%s): %s" % (src, k, p))
                else: raise Abort("invalid pattern (%s): %s" % (k, p))

        def buildfn(text):
            for m in matches:
                r = m(text)
                if r:
                    return r

        return buildfn

    def globprefix(pat):
        '''return the non-glob prefix of a path, e.g. foo/* -> foo'''
        root = []
        for p in pat.split(os.sep):
            if contains_glob(p): break
            root.append(p)
        return '/'.join(root)

    pats = []
    files = []
    roots = []
    for kind, name in [patkind(p, dflt_pat) for p in names]:
        if kind in ('glob', 'relpath'):
            name = canonpath(canonroot, cwd, name)
            if name == '':
                kind, name = 'glob', '**'
        if kind in ('glob', 'path', 're'):
            pats.append((kind, name))
            if kind == 'glob':
                root = globprefix(name)
                if root: roots.append(root)
        elif kind == 'relpath':
            files.append((kind, name))
            roots.append(name)

    patmatch = matchfn(pats, '$') or always
    filematch = matchfn(files, '(?:/|$)') or always
    incmatch = always
    if inc:
        inckinds = [patkind(canonpath(canonroot, cwd, i)) for i in inc]
        incmatch = matchfn(inckinds, '(?:/|$)')
    excmatch = lambda fn: False
    if exc:
        exckinds = [patkind(canonpath(canonroot, cwd, x)) for x in exc]
        excmatch = matchfn(exckinds, '(?:/|$)')

    return (roots,
            lambda fn: (incmatch(fn) and not excmatch(fn) and
                        (fn.endswith('/') or
                         (not pats and not files) or
                         (pats and patmatch(fn)) or
                         (files and filematch(fn)))),
            (inc or exc or (pats and pats != [('glob', '**')])) and True)

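The final predicate _matcher() returns composes three tests: the file must pass the include set, must not hit the exclude set, and must match some pattern. A self-contained sketch of that composition over plain regexes (names hypothetical; this omits the glob translation, rooting, and explicit-file handling of the real code):

```python
import re

def make_match(pats, inc, exc):
    def anymatch(ps, empty):
        # an empty list degenerates to a constant predicate
        if not ps:
            return lambda fn: empty
        ms = [re.compile(p).match for p in ps]
        return lambda fn: any(m(fn) for m in ms)
    patm = anymatch(pats, True)   # no patterns means "match everything"
    incm = anymatch(inc, True)    # no includes means "include everything"
    excm = anymatch(exc, False)   # no excludes means "exclude nothing"
    return lambda fn: bool(incm(fn) and not excm(fn) and patm(fn))
```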
def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None):
    '''enhanced shell command execution.
    run with environment maybe modified, maybe in different dir.

    if command fails and onerr is None, return status. if ui object,
    print error message and return status, else raise onerr object as
    exception.'''
    def py2shell(val):
        'convert python object into string that is useful to shell'
        if val in (None, False):
            return '0'
        if val == True:
            return '1'
        return str(val)
    oldenv = {}
    for k in environ:
        oldenv[k] = os.environ.get(k)
    if cwd is not None:
        oldcwd = os.getcwd()
    try:
        for k, v in environ.iteritems():
            os.environ[k] = py2shell(v)
        if cwd is not None and oldcwd != cwd:
            os.chdir(cwd)
        rc = os.system(cmd)
        if rc and onerr:
            errmsg = '%s %s' % (os.path.basename(cmd.split(None, 1)[0]),
                                explain_exit(rc)[0])
            if errprefix:
                errmsg = '%s: %s' % (errprefix, errmsg)
            try:
                onerr.warn(errmsg + '\n')
            except AttributeError:
                raise onerr(errmsg)
        return rc
    finally:
        for k, v in oldenv.iteritems():
            if v is None:
                del os.environ[k]
            else:
                os.environ[k] = v
        if cwd is not None and oldcwd != cwd:
            os.chdir(oldcwd)

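The save/modify/restore environment pattern that system() wraps around os.system can be sketched on its own (helper name hypothetical): record the prior value of each variable before overwriting it, with None standing in for "was unset", and restore in a finally block so the environment is repaired even if the callable raises.

```python
import os

def with_env(environ, fn):
    # None records "variable was unset" so restore can delete it again
    old = dict((k, os.environ.get(k)) for k in environ)
    try:
        os.environ.update(environ)
        return fn()
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v
```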
523 def rename(src, dst):
524 def rename(src, dst):
524 """forcibly rename a file"""
525 """forcibly rename a file"""
525 try:
526 try:
526 os.rename(src, dst)
527 os.rename(src, dst)
527 except OSError, err:
528 except OSError, err:
528 # on windows, rename to existing file is not allowed, so we
529 # on windows, rename to existing file is not allowed, so we
529 # must delete destination first. but if file is open, unlink
530 # must delete destination first. but if file is open, unlink
530 # schedules it for delete but does not delete it. rename
531 # schedules it for delete but does not delete it. rename
531 # happens immediately even for open files, so we create
532 # happens immediately even for open files, so we create
532 # temporary file, delete it, rename destination to that name,
533 # temporary file, delete it, rename destination to that name,
533 # then delete that. then rename is safe to do.
534 # then delete that. then rename is safe to do.
534 fd, temp = tempfile.mkstemp(dir=os.path.dirname(dst) or '.')
535 fd, temp = tempfile.mkstemp(dir=os.path.dirname(dst) or '.')
535 os.close(fd)
536 os.close(fd)
536 os.unlink(temp)
537 os.unlink(temp)
537 os.rename(dst, temp)
538 os.rename(dst, temp)
538 os.unlink(temp)
539 os.unlink(temp)
539 os.rename(src, dst)
540 os.rename(src, dst)
540
541
def unlink(f):
    """unlink and remove the directory if it is empty"""
    os.unlink(f)
    # try removing directories that might now be empty
    try:
        os.removedirs(os.path.dirname(f))
    except OSError:
        pass

def copyfile(src, dest):
    "copy a file, preserving mode"
    try:
        shutil.copyfile(src, dest)
        shutil.copymode(src, dest)
    except shutil.Error, inst:
        raise Abort(str(inst))

def copyfiles(src, dst, hardlink=None):
    """Copy a directory tree using hardlinks if possible"""

    if hardlink is None:
        hardlink = (os.stat(src).st_dev ==
                    os.stat(os.path.dirname(dst)).st_dev)

    if os.path.isdir(src):
        os.mkdir(dst)
        for name in os.listdir(src):
            srcname = os.path.join(src, name)
            dstname = os.path.join(dst, name)
            copyfiles(srcname, dstname, hardlink)
    else:
        if hardlink:
            try:
                os_link(src, dst)
            except (IOError, OSError):
                hardlink = False
                shutil.copy(src, dst)
        else:
            shutil.copy(src, dst)

def audit_path(path):
    """Abort if path contains dangerous components"""
    parts = os.path.normcase(path).split(os.sep)
    if (os.path.splitdrive(path)[0] or parts[0] in ('.hg', '')
        or os.pardir in parts):
        raise Abort(_("path contains illegal component: %s") % path)
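# Illustrative note (not part of the original source): audit_path rejects
# paths that could escape or corrupt the repository, e.g.
#   audit_path('.hg/hgrc')    -> Abort (first component is '.hg')
#   audit_path('../outside')  -> Abort (contains os.pardir)
#   audit_path('/etc/passwd') -> Abort (absolute: first component is '')
# while an ordinary relative path like 'dir/file' passes silently.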

def _makelock_file(info, pathname):
    ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
    os.write(ld, info)
    os.close(ld)

def _readlock_file(pathname):
    return posixfile(pathname).read()

def nlinks(pathname):
    """Return number of hardlinks for the given file."""
    return os.lstat(pathname).st_nlink

if hasattr(os, 'link'):
    os_link = os.link
else:
    def os_link(src, dst):
        raise OSError(0, _("Hardlinks not supported"))

def fstat(fp):
    '''stat file object that may not have fileno method.'''
    try:
        return os.fstat(fp.fileno())
    except AttributeError:
        return os.stat(fp.name)

posixfile = file

def is_win_9x():
    '''return true if run on windows 95, 98 or me.'''
    try:
        return sys.getwindowsversion()[3] == 1
    except AttributeError:
        return os.name == 'nt' and 'command' in os.environ.get('comspec', '')

getuser_fallback = None

def getuser():
    '''return name of current user'''
    try:
        return getpass.getuser()
    except ImportError:
        # import of pwd will fail on windows - try fallback
        if getuser_fallback:
            return getuser_fallback()
    # raised if win32api not available
    raise Abort(_('user name not available - set USERNAME '
                  'environment variable'))

def username(uid=None):
    """Return the name of the user with the given uid.

    If uid is None, return the name of the current user."""
    try:
        import pwd
        if uid is None:
            uid = os.getuid()
        try:
            return pwd.getpwuid(uid)[0]
        except KeyError:
            return str(uid)
    except ImportError:
        return None

def groupname(gid=None):
    """Return the name of the group with the given gid.

    If gid is None, return the name of the current group."""
    try:
        import grp
        if gid is None:
            gid = os.getgid()
        try:
            return grp.getgrgid(gid)[0]
        except KeyError:
            return str(gid)
    except ImportError:
        return None

# File system features

def checkfolding(path):
    """
    Check whether the given path is on a case-sensitive filesystem

    Requires a path (like /foo/.hg) ending with a foldable final
    directory component.
    """
    s1 = os.stat(path)
    d, b = os.path.split(path)
    p2 = os.path.join(d, b.upper())
    if path == p2:
        p2 = os.path.join(d, b.lower())
    try:
        s2 = os.stat(p2)
        if s2 == s1:
            return False
        return True
    except:
        return True
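# Illustrative note (not part of the original source): checkfolding stats
# the path again under a case-swapped final component; if both names yield
# the same stat result, the filesystem folds case. So on a typical Linux
# ext3 mount checkfolding('/repo/.hg') returns True (case-sensitive),
# while on a default HFS+ volume it returns False.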

# Platform specific variants
if os.name == 'nt':
    demandload(globals(), "msvcrt")
    nulldev = 'NUL:'

    class winstdout:
        '''stdout on windows misbehaves if sent through a pipe'''

        def __init__(self, fp):
            self.fp = fp

        def __getattr__(self, key):
            return getattr(self.fp, key)

        def close(self):
            try:
                self.fp.close()
            except: pass

        def write(self, s):
            try:
                return self.fp.write(s)
            except IOError, inst:
                if inst.errno != 0: raise
                self.close()
                raise IOError(errno.EPIPE, 'Broken pipe')

    sys.stdout = winstdout(sys.stdout)

    def system_rcpath():
        try:
            return system_rcpath_win32()
        except:
            return [r'c:\mercurial\mercurial.ini']

    def os_rcpath():
        '''return default os-specific hgrc search path'''
        path = system_rcpath()
        path.append(user_rcpath())
        userprofile = os.environ.get('USERPROFILE')
        if userprofile:
            path.append(os.path.join(userprofile, 'mercurial.ini'))
        return path

    def user_rcpath():
        '''return os-specific hgrc search path to the user dir'''
        return os.path.join(os.path.expanduser('~'), 'mercurial.ini')

    def parse_patch_output(output_line):
        """parses the output produced by patch and returns the file name"""
        pf = output_line[14:]
        if pf[0] == '`':
            pf = pf[1:-1] # Remove the quotes
        return pf

    def testpid(pid):
        '''return False if pid dead, True if running or not known'''
        return True

    def is_exec(f, last):
        return last

    def set_exec(f, mode):
        pass

    def set_binary(fd):
        msvcrt.setmode(fd.fileno(), os.O_BINARY)

    def pconvert(path):
        return path.replace("\\", "/")

    def localpath(path):
        return path.replace('/', '\\')

    def normpath(path):
        return pconvert(os.path.normpath(path))

    makelock = _makelock_file
    readlock = _readlock_file

    def samestat(s1, s2):
        return False

    def shellquote(s):
        return '"%s"' % s.replace('"', '\\"')

    def explain_exit(code):
        return _("exited with status %d") % code, code

    # if you change this stub into a real check, please try to implement the
    # username and groupname functions above, too.
    def isowner(fp, st=None):
        return True

    try:
        # override functions with win32 versions if possible
        from util_win32 import *
        if not is_win_9x():
            posixfile = posixfile_nt
    except ImportError:
        pass

else:
    nulldev = '/dev/null'

    def rcfiles(path):
        rcs = [os.path.join(path, 'hgrc')]
        rcdir = os.path.join(path, 'hgrc.d')
        try:
            rcs.extend([os.path.join(rcdir, f) for f in os.listdir(rcdir)
                        if f.endswith(".rc")])
        except OSError:
            pass
        return rcs

    def os_rcpath():
        '''return default os-specific hgrc search path'''
        path = []
        # old mod_python does not set sys.argv
        if len(getattr(sys, 'argv', [])) > 0:
            path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
                                '/../etc/mercurial'))
        path.extend(rcfiles('/etc/mercurial'))
        path.append(os.path.expanduser('~/.hgrc'))
        path = [os.path.normpath(f) for f in path]
        return path

    def parse_patch_output(output_line):
        """parses the output produced by patch and returns the file name"""
        pf = output_line[14:]
        if pf.startswith("'") and pf.endswith("'") and " " in pf:
            pf = pf[1:-1] # Remove the quotes
        return pf

    def is_exec(f, last):
        """check whether a file is executable"""
        return (os.lstat(f).st_mode & 0100 != 0)

    def set_exec(f, mode):
        s = os.lstat(f).st_mode
        if (s & 0100 != 0) == mode:
            return
        if mode:
            # Turn on +x for every +r bit when making a file executable
            # and obey umask.
            umask = os.umask(0)
            os.umask(umask)
            os.chmod(f, s | (s & 0444) >> 2 & ~umask)
        else:
            os.chmod(f, s & 0666)

    def set_binary(fd):
        pass

    def pconvert(path):
        return path

    def localpath(path):
        return path

    normpath = os.path.normpath
    samestat = os.path.samestat

    def makelock(info, pathname):
        try:
            os.symlink(info, pathname)
        except OSError, why:
            if why.errno == errno.EEXIST:
                raise
            else:
                _makelock_file(info, pathname)

    def readlock(pathname):
        try:
            return os.readlink(pathname)
        except OSError, why:
            if why.errno == errno.EINVAL:
                return _readlock_file(pathname)
            else:
                raise

    def shellquote(s):
        return "'%s'" % s.replace("'", "'\\''")
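    # Illustrative note (not part of the original source): on POSIX, single
    # quotes protect every character except the quote itself, which is
    # closed, escaped and reopened, e.g.
    #   shellquote("it's")  ->  'it'\''s'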

    def testpid(pid):
        '''return False if pid dead, True if running or not sure'''
        try:
            os.kill(pid, 0)
            return True
        except OSError, inst:
            return inst.errno != errno.ESRCH

    def explain_exit(code):
        """return a 2-tuple (desc, code) describing a process's status"""
        if os.WIFEXITED(code):
            val = os.WEXITSTATUS(code)
            return _("exited with status %d") % val, val
        elif os.WIFSIGNALED(code):
            val = os.WTERMSIG(code)
            return _("killed by signal %d") % val, val
        elif os.WIFSTOPPED(code):
            val = os.WSTOPSIG(code)
            return _("stopped by signal %d") % val, val
        raise ValueError(_("invalid exit code"))

    def isowner(fp, st=None):
        """Return True if the file object fp belongs to the current user.

        The return value of a util.fstat(fp) may be passed as the st argument.
        """
        if st is None:
            st = fstat(fp)
        return st.st_uid == os.getuid()


def opener(base, audit=True):
    """
    return a function that opens files relative to base

    this function is used to hide the details of COW semantics and
    remote file access from higher level code.
    """
    p = base
    audit_p = audit

    def mktempcopy(name):
        d, fn = os.path.split(name)
        fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
        os.close(fd)
        ofp = posixfile(temp, "wb")
        try:
            try:
                ifp = posixfile(name, "rb")
            except IOError, inst:
                if not getattr(inst, 'filename', None):
                    inst.filename = name
                raise
            for chunk in filechunkiter(ifp):
                ofp.write(chunk)
            ifp.close()
            ofp.close()
        except:
            try: os.unlink(temp)
            except: pass
            raise
        st = os.lstat(name)
        os.chmod(temp, st.st_mode)
        return temp

    class atomictempfile(posixfile):
        """the file will only be copied when rename is called"""
        def __init__(self, name, mode):
            self.__name = name
            self.temp = mktempcopy(name)
            posixfile.__init__(self, self.temp, mode)
        def rename(self):
            if not self.closed:
                posixfile.close(self)
                rename(self.temp, localpath(self.__name))
        def __del__(self):
            if not self.closed:
                try:
                    os.unlink(self.temp)
                except: pass
                posixfile.close(self)

    class atomicfile(atomictempfile):
        """the file will only be copied on close"""
        def __init__(self, name, mode):
            atomictempfile.__init__(self, name, mode)
        def close(self):
            self.rename()
        def __del__(self):
            self.rename()

    def o(path, mode="r", text=False, atomic=False, atomictemp=False):
        if audit_p:
            audit_path(path)
        f = os.path.join(p, path)

        if not text:
            mode += "b" # for that other OS

        if mode[0] != "r":
            try:
                nlink = nlinks(f)
            except OSError:
                d = os.path.dirname(f)
                if not os.path.isdir(d):
                    os.makedirs(d)
            else:
                if atomic:
                    return atomicfile(f, mode)
                elif atomictemp:
                    return atomictempfile(f, mode)
                if nlink > 1:
                    rename(mktempcopy(f), f)
        return posixfile(f, mode)

    return o

class chunkbuffer(object):
    """Allow arbitrary sized chunks of data to be efficiently read from an
    iterator over chunks of arbitrary size."""

    def __init__(self, in_iter, targetsize = 2**16):
        """in_iter is the iterator that's iterating over the input chunks.
        targetsize is how big a buffer to try to maintain."""
        self.in_iter = iter(in_iter)
        self.buf = ''
        self.targetsize = int(targetsize)
        if self.targetsize <= 0:
            raise ValueError(_("targetsize must be greater than 0, was %d") %
                             targetsize)
        self.iterempty = False

    def fillbuf(self):
        """Ignore target size; read every chunk from iterator until empty."""
        if not self.iterempty:
            collector = cStringIO.StringIO()
            collector.write(self.buf)
            for ch in self.in_iter:
                collector.write(ch)
            self.buf = collector.getvalue()
            self.iterempty = True

    def read(self, l):
        """Read L bytes of data from the iterator of chunks of data.
        Returns less than L bytes if the iterator runs dry."""
        if l > len(self.buf) and not self.iterempty:
            # Clamp to a multiple of self.targetsize
            targetsize = self.targetsize * ((l // self.targetsize) + 1)
            collector = cStringIO.StringIO()
            collector.write(self.buf)
            collected = len(self.buf)
            for chunk in self.in_iter:
                collector.write(chunk)
                collected += len(chunk)
                if collected >= targetsize:
                    break
            if collected < targetsize:
                self.iterempty = True
            self.buf = collector.getvalue()
        s, self.buf = self.buf[:l], buffer(self.buf, l)
        return s
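# Illustrative note (not part of the original source): chunkbuffer rebuffers
# an iterator of unevenly sized strings so callers can read fixed amounts,
# e.g.
#   cb = chunkbuffer(['ab', 'cde', 'f'])
#   cb.read(4)  -> 'abcd'
#   cb.read(4)  -> 'ef'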

def filechunkiter(f, size=65536, limit=None):
    """Create a generator that produces the data in the file size
    (default 65536) bytes at a time, up to optional limit (default is
    to read all data).  Chunks may be less than size bytes if the
    chunk is the last chunk in the file, or the file is a socket or
    some other type of file that sometimes reads less data than is
    requested."""
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        if limit is None: nbytes = size
        else: nbytes = min(limit, size)
        s = nbytes and f.read(nbytes)
        if not s: break
        if limit: limit -= len(s)
        yield s
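# Illustrative note (not part of the original source): filechunkiter lets
# callers stream a file without holding it all in memory, e.g.
#   import cStringIO
#   fp = cStringIO.StringIO('x' * 5)
#   [len(c) for c in filechunkiter(fp, size=2)]  -> [2, 2, 1]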

def makedate():
    lt = time.localtime()
    if lt[8] == 1 and time.daylight:
        tz = time.altzone
    else:
        tz = time.timezone
    return time.mktime(lt), tz

def datestr(date=None, format='%a %b %d %H:%M:%S %Y', timezone=True):
    """represent a (unixtime, offset) tuple as a localized time.
    unixtime is seconds since the epoch, and offset is the time zone's
    number of seconds away from UTC. if timezone is false, do not
    append time zone to string."""
    t, tz = date or makedate()
    s = time.strftime(format, time.gmtime(float(t) - tz))
    if timezone:
        s += " %+03d%02d" % (-tz / 3600, ((-tz % 3600) / 60))
    return s
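# Illustrative note (not part of the original source): the offset is stored
# as seconds *behind* UTC, so UTC+2 is -7200, e.g.
#   datestr((0, -7200))  ->  'Thu Jan 01 02:00:00 1970 +0200'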

def strdate(string, format, defaults):
    """parse a localized time string and return a (unixtime, offset) tuple.
    if the string cannot be parsed, ValueError is raised."""
    def timezone(string):
        tz = string.split()[-1]
        if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
            tz = int(tz)
            offset = - 3600 * (tz / 100) - 60 * (tz % 100)
            return offset
        if tz == "GMT" or tz == "UTC":
            return 0
        return None

    # NOTE: unixtime = localunixtime + offset
    offset, date = timezone(string), string
    if offset != None:
        date = " ".join(string.split()[:-1])

    # add missing elements from defaults
    for part in defaults:
        found = [True for p in part if ("%"+p) in format]
        if not found:
            date += "@" + defaults[part]
            format += "@%" + part[0]

    timetuple = time.strptime(date, format)
    localunixtime = int(calendar.timegm(timetuple))
    if offset is None:
        # local timezone
        unixtime = int(time.mktime(timetuple))
        offset = unixtime - localunixtime
    else:
        unixtime = localunixtime + offset
    return unixtime, offset

def parsedate(string, formats=None, defaults=None):
    """parse a localized time string and return a (unixtime, offset) tuple.
    The date may be a "unixtime offset" string or in one of the specified
    formats."""
    if not string:
        return 0, 0
    if not formats:
        formats = defaultdateformats
    string = string.strip()
    try:
        when, offset = map(int, string.split(' '))
    except ValueError:
        # fill out defaults
        if not defaults:
            defaults = {}
        now = makedate()
        for part in "d mb yY HI M S".split():
            if part not in defaults:
                if part[0] in "HMS":
                    defaults[part] = "00"
                elif part[0] in "dm":
                    defaults[part] = "1"
                else:
                    defaults[part] = datestr(now, "%" + part[0], False)

        for format in formats:
            try:
                when, offset = strdate(string, format, defaults)
            except ValueError:
                pass
            else:
                break
        else:
1138 raise Abort(_('invalid date: %r ') % string)
1139 raise Abort(_('invalid date: %r ') % string)
1139 # validate explicit (probably user-specified) date and
1140 # validate explicit (probably user-specified) date and
1140 # time zone offset. values must fit in signed 32 bits for
1141 # time zone offset. values must fit in signed 32 bits for
1141 # current 32-bit linux runtimes. timezones go from UTC-12
1142 # current 32-bit linux runtimes. timezones go from UTC-12
1142 # to UTC+14
1143 # to UTC+14
1143 if abs(when) > 0x7fffffff:
1144 if abs(when) > 0x7fffffff:
1144 raise Abort(_('date exceeds 32 bits: %d') % when)
1145 raise Abort(_('date exceeds 32 bits: %d') % when)
1145 if offset < -50400 or offset > 43200:
1146 if offset < -50400 or offset > 43200:
1146 raise Abort(_('impossible time zone offset: %d') % offset)
1147 raise Abort(_('impossible time zone offset: %d') % offset)
1147 return when, offset
1148 return when, offset
1148
1149
1149 def matchdate(date):
1150 def matchdate(date):
1150 """Return a function that matches a given date match specifier
1151 """Return a function that matches a given date match specifier
1151
1152
1152 Formats include:
1153 Formats include:
1153
1154
1154 '{date}' match a given date to the accuracy provided
1155 '{date}' match a given date to the accuracy provided
1155
1156
1156 '<{date}' on or before a given date
1157 '<{date}' on or before a given date
1157
1158
1158 '>{date}' on or after a given date
1159 '>{date}' on or after a given date
1159
1160
1160 """
1161 """
1161
1162
1162 def lower(date):
1163 def lower(date):
1163 return parsedate(date, extendeddateformats)[0]
1164 return parsedate(date, extendeddateformats)[0]
1164
1165
1165 def upper(date):
1166 def upper(date):
1166 d = dict(mb="12", HI="23", M="59", S="59")
1167 d = dict(mb="12", HI="23", M="59", S="59")
1167 for days in "31 30 29".split():
1168 for days in "31 30 29".split():
1168 try:
1169 try:
1169 d["d"] = days
1170 d["d"] = days
1170 return parsedate(date, extendeddateformats, d)[0]
1171 return parsedate(date, extendeddateformats, d)[0]
1171 except:
1172 except:
1172 pass
1173 pass
1173 d["d"] = "28"
1174 d["d"] = "28"
1174 return parsedate(date, extendeddateformats, d)[0]
1175 return parsedate(date, extendeddateformats, d)[0]
1175
1176
1176 if date[0] == "<":
1177 if date[0] == "<":
1177 when = upper(date[1:])
1178 when = upper(date[1:])
1178 return lambda x: x <= when
1179 return lambda x: x <= when
1179 elif date[0] == ">":
1180 elif date[0] == ">":
1180 when = lower(date[1:])
1181 when = lower(date[1:])
1181 return lambda x: x >= when
1182 return lambda x: x >= when
1182 elif date[0] == "-":
1183 elif date[0] == "-":
1183 try:
1184 try:
1184 days = int(date[1:])
1185 days = int(date[1:])
1185 except ValueError:
1186 except ValueError:
1186 raise Abort(_("invalid day spec: %s") % date[1:])
1187 raise Abort(_("invalid day spec: %s") % date[1:])
1187 when = makedate()[0] - days * 3600 * 24
1188 when = makedate()[0] - days * 3600 * 24
1188 return lambda x: x >= when
1189 return lambda x: x >= when
1189 elif " to " in date:
1190 elif " to " in date:
1190 a, b = date.split(" to ")
1191 a, b = date.split(" to ")
1191 start, stop = lower(a), upper(b)
1192 start, stop = lower(a), upper(b)
1192 return lambda x: x >= start and x <= stop
1193 return lambda x: x >= start and x <= stop
1193 else:
1194 else:
1194 start, stop = lower(date), upper(date)
1195 start, stop = lower(date), upper(date)
1195 return lambda x: x >= start and x <= stop
1196 return lambda x: x >= start and x <= stop
1196
1197
1197 def shortuser(user):
1198 def shortuser(user):
1198 """Return a short representation of a user name or email address."""
1199 """Return a short representation of a user name or email address."""
1199 f = user.find('@')
1200 f = user.find('@')
1200 if f >= 0:
1201 if f >= 0:
1201 user = user[:f]
1202 user = user[:f]
1202 f = user.find('<')
1203 f = user.find('<')
1203 if f >= 0:
1204 if f >= 0:
1204 user = user[f+1:]
1205 user = user[f+1:]
1205 f = user.find(' ')
1206 f = user.find(' ')
1206 if f >= 0:
1207 if f >= 0:
1207 user = user[:f]
1208 user = user[:f]
1208 f = user.find('.')
1209 f = user.find('.')
1209 if f >= 0:
1210 if f >= 0:
1210 user = user[:f]
1211 user = user[:f]
1211 return user
1212 return user
1212
1213
1213 def ellipsis(text, maxlength=400):
1214 def ellipsis(text, maxlength=400):
1214 """Trim string to at most maxlength (default: 400) characters."""
1215 """Trim string to at most maxlength (default: 400) characters."""
1215 if len(text) <= maxlength:
1216 if len(text) <= maxlength:
1216 return text
1217 return text
1217 else:
1218 else:
1218 return "%s..." % (text[:maxlength-3])
1219 return "%s..." % (text[:maxlength-3])
1219
1220
1220 def walkrepos(path):
1221 def walkrepos(path):
1221 '''yield every hg repository under path, recursively.'''
1222 '''yield every hg repository under path, recursively.'''
1222 def errhandler(err):
1223 def errhandler(err):
1223 if err.filename == path:
1224 if err.filename == path:
1224 raise err
1225 raise err
1225
1226
1226 for root, dirs, files in os.walk(path, onerror=errhandler):
1227 for root, dirs, files in os.walk(path, onerror=errhandler):
1227 for d in dirs:
1228 for d in dirs:
1228 if d == '.hg':
1229 if d == '.hg':
1229 yield root
1230 yield root
1230 dirs[:] = []
1231 dirs[:] = []
1231 break
1232 break
1232
1233
1233 _rcpath = None
1234 _rcpath = None
1234
1235
1235 def rcpath():
1236 def rcpath():
1236 '''return hgrc search path. if env var HGRCPATH is set, use it.
1237 '''return hgrc search path. if env var HGRCPATH is set, use it.
1237 for each item in path, if directory, use files ending in .rc,
1238 for each item in path, if directory, use files ending in .rc,
1238 else use item.
1239 else use item.
1239 make HGRCPATH empty to only look in .hg/hgrc of current repo.
1240 make HGRCPATH empty to only look in .hg/hgrc of current repo.
1240 if no HGRCPATH, use default os-specific path.'''
1241 if no HGRCPATH, use default os-specific path.'''
1241 global _rcpath
1242 global _rcpath
1242 if _rcpath is None:
1243 if _rcpath is None:
1243 if 'HGRCPATH' in os.environ:
1244 if 'HGRCPATH' in os.environ:
1244 _rcpath = []
1245 _rcpath = []
1245 for p in os.environ['HGRCPATH'].split(os.pathsep):
1246 for p in os.environ['HGRCPATH'].split(os.pathsep):
1246 if not p: continue
1247 if not p: continue
1247 if os.path.isdir(p):
1248 if os.path.isdir(p):
1248 for f in os.listdir(p):
1249 for f in os.listdir(p):
1249 if f.endswith('.rc'):
1250 if f.endswith('.rc'):
1250 _rcpath.append(os.path.join(p, f))
1251 _rcpath.append(os.path.join(p, f))
1251 else:
1252 else:
1252 _rcpath.append(p)
1253 _rcpath.append(p)
1253 else:
1254 else:
1254 _rcpath = os_rcpath()
1255 _rcpath = os_rcpath()
1255 return _rcpath
1256 return _rcpath
1256
1257
1257 def bytecount(nbytes):
1258 def bytecount(nbytes):
1258 '''return byte count formatted as readable string, with units'''
1259 '''return byte count formatted as readable string, with units'''
1259
1260
1260 units = (
1261 units = (
1261 (100, 1<<30, _('%.0f GB')),
1262 (100, 1<<30, _('%.0f GB')),
1262 (10, 1<<30, _('%.1f GB')),
1263 (10, 1<<30, _('%.1f GB')),
1263 (1, 1<<30, _('%.2f GB')),
1264 (1, 1<<30, _('%.2f GB')),
1264 (100, 1<<20, _('%.0f MB')),
1265 (100, 1<<20, _('%.0f MB')),
1265 (10, 1<<20, _('%.1f MB')),
1266 (10, 1<<20, _('%.1f MB')),
1266 (1, 1<<20, _('%.2f MB')),
1267 (1, 1<<20, _('%.2f MB')),
1267 (100, 1<<10, _('%.0f KB')),
1268 (100, 1<<10, _('%.0f KB')),
1268 (10, 1<<10, _('%.1f KB')),
1269 (10, 1<<10, _('%.1f KB')),
1269 (1, 1<<10, _('%.2f KB')),
1270 (1, 1<<10, _('%.2f KB')),
1270 (1, 1, _('%.0f bytes')),
1271 (1, 1, _('%.0f bytes')),
1271 )
1272 )
1272
1273
1273 for multiplier, divisor, format in units:
1274 for multiplier, divisor, format in units:
1274 if nbytes >= divisor * multiplier:
1275 if nbytes >= divisor * multiplier:
1275 return format % (nbytes / float(divisor))
1276 return format % (nbytes / float(divisor))
1276 return units[-1][2] % nbytes
1277 return units[-1][2] % nbytes
1277
1278
1278 def drop_scheme(scheme, path):
1279 def drop_scheme(scheme, path):
1279 sc = scheme + ':'
1280 sc = scheme + ':'
1280 if path.startswith(sc):
1281 if path.startswith(sc):
1281 path = path[len(sc):]
1282 path = path[len(sc):]
1282 if path.startswith('//'):
1283 if path.startswith('//'):
1283 path = path[2:]
1284 path = path[2:]
1284 return path
1285 return path
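The "unixtime offset" fast path and the sanity checks in parsedate above can be sketched standalone. This is a minimal illustration, not Mercurial's API: validate_date is a hypothetical helper, and the error messages are paraphrased.

```python
# Sketch of parsedate's "<unixtime> <offset>" fast path and its range
# checks; validate_date is a hypothetical name, not part of util.py.
def validate_date(string):
    """Parse "<unixtime> <offset>" and apply parsedate-style checks."""
    when, offset = map(int, string.split(' '))
    # timestamps must fit in signed 32 bits (32-bit runtimes)
    if abs(when) > 0x7fffffff:
        raise ValueError('date exceeds 32 bits: %d' % when)
    # offsets run from -50400 (UTC+14) to 43200 (UTC-12) seconds,
    # since unixtime = localunixtime + offset
    if offset < -50400 or offset > 43200:
        raise ValueError('impossible time zone offset: %d' % offset)
    return when, offset
```

Any string that is not two integers raises ValueError from int(), which is why parsedate falls back to format-based parsing in that case.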
NO CONTENT: modified file, binary diff hidden
@@ -1,49 +1,54
#!/bin/sh

hg init t
cd t

# we need a repo with some legacy latin-1 changesets
hg unbundle $TESTDIR/legacy-encoding.hg
hg co

python << EOF
f = file('latin-1', 'w'); f.write("latin-1 e' encoded: \xe9"); f.close()
f = file('utf-8', 'w'); f.write("utf-8 e' encoded: \xc3\xa9"); f.close()
f = file('latin-1-tag', 'w'); f.write("\xe9"); f.close()
EOF

echo % should fail with encoding error
echo "plain old ascii" > a
hg st
HGENCODING=ascii hg ci -l latin-1 -d "0 0"

echo % these should work
echo "latin-1" > a
HGENCODING=latin-1 hg ci -l latin-1 -d "0 0"
echo "utf-8" > a
HGENCODING=utf-8 hg ci -l utf-8 -d "0 0"

HGENCODING=latin-1 hg tag -d "0 0" `cat latin-1-tag`
cp latin-1-tag .hg/branch
HGENCODING=latin-1 hg ci -d "0 0" -m 'latin1 branch'
rm .hg/branch

echo % ascii
hg --encoding ascii log
echo % latin-1
hg --encoding latin-1 log
echo % utf-8
hg --encoding utf-8 log
echo % ascii
HGENCODING=ascii hg tags
echo % latin-1
HGENCODING=latin-1 hg tags
echo % utf-8
HGENCODING=utf-8 hg tags
echo % ascii
HGENCODING=ascii hg branches
echo % latin-1
HGENCODING=latin-1 hg branches
echo % utf-8
HGENCODING=utf-8 hg branches
+
+echo '[ui]' >> .hg/hgrc
+echo 'fallbackencoding = euc-jp' >> .hg/hgrc
+echo % utf-8
+HGENCODING=utf-8 hg log
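The byte sequences this test depends on can be checked in plain Python (an illustration, not part of the test suite): the same character has different encodings, which is why a repository can hold "legacy" non-UTF-8 changelog text that only a fallback encoding such as euc-jp can recover.

```python
# Why the commits above need different encodings (illustrative only).
e_acute = u'\xe9'  # the character "e acute"
assert e_acute.encode('latin-1') == b'\xe9'        # one byte in latin-1
assert e_acute.encode('utf-8') == b'\xc3\xa9'      # two bytes in utf-8

# decoding latin-1 bytes as ASCII fails -- the abort the test expects
try:
    b'\xe9'.decode('ascii')
except UnicodeDecodeError:
    pass

# bytes that are invalid UTF-8 may still decode under a fallback
# encoding, which is what [ui] fallbackencoding = euc-jp enables
japanese = u'\u65e5\u672c\u8a9e'  # the text in the euc-jp changeset
assert japanese.encode('euc-jp').decode('euc-jp') == japanese
```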
@@ -1,118 +1,167
adding changesets
adding manifests
adding file changes
-added 1 changesets with 1 changes to 1 files
+added 2 changesets with 2 changes to 1 files
(run 'hg update' to get a working copy)
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
% should fail with encoding error
M a
? latin-1
? latin-1-tag
? utf-8
abort: decoding near ' encoded: �': 'ascii' codec can't decode byte 0xe9 in position 20: ordinal not in range(128)!

transaction abort!
rollback completed
% these should work
% ascii
-changeset: 4:d8a5d9eaf41e
+changeset: 5:e4ed49b8a8f0
branch: ?
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin1 branch

-changeset: 3:5edfc7acb541
+changeset: 4:a02ca5a58e99
user: test
date: Thu Jan 01 00:00:00 1970 +0000
-summary: Added tag ? for changeset 91878608adb3
+summary: Added tag ? for changeset d47908dab82f

-changeset: 2:91878608adb3
+changeset: 3:d47908dab82f
tag: ?
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: utf-8 e' encoded: ?

-changeset: 1:6355cacf842e
+changeset: 2:9db1985f3097
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e' encoded: ?

+changeset: 1:af6e0db4427c
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: euc-jp: ?????? = u'\u65e5\u672c\u8a9e'
+
changeset: 0:60aad1dd20a9
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e': ?

% latin-1
-changeset: 4:d8a5d9eaf41e
+changeset: 5:e4ed49b8a8f0
branch: �
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin1 branch

-changeset: 3:5edfc7acb541
+changeset: 4:a02ca5a58e99
user: test
date: Thu Jan 01 00:00:00 1970 +0000
-summary: Added tag � for changeset 91878608adb3
+summary: Added tag � for changeset d47908dab82f

-changeset: 2:91878608adb3
+changeset: 3:d47908dab82f
tag: �
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: utf-8 e' encoded: �

-changeset: 1:6355cacf842e
+changeset: 2:9db1985f3097
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e' encoded: �

+changeset: 1:af6e0db4427c
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: euc-jp: ���ܸ� = u'\u65e5\u672c\u8a9e'
+
changeset: 0:60aad1dd20a9
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e': �

% utf-8
-changeset: 4:d8a5d9eaf41e
+changeset: 5:e4ed49b8a8f0
branch: é
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin1 branch

-changeset: 3:5edfc7acb541
+changeset: 4:a02ca5a58e99
user: test
date: Thu Jan 01 00:00:00 1970 +0000
-summary: Added tag é for changeset 91878608adb3
+summary: Added tag é for changeset d47908dab82f

-changeset: 2:91878608adb3
+changeset: 3:d47908dab82f
tag: é
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: utf-8 e' encoded: é

-changeset: 1:6355cacf842e
+changeset: 2:9db1985f3097
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e' encoded: é

+changeset: 1:af6e0db4427c
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: euc-jp: ÆüËܸì = u'\u65e5\u672c\u8a9e'
+
changeset: 0:60aad1dd20a9
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: latin-1 e': é

% ascii
-tip 4:d8a5d9eaf41e
-? 2:91878608adb3
+tip 5:e4ed49b8a8f0
+? 3:d47908dab82f
% latin-1
-tip 4:d8a5d9eaf41e
-� 2:91878608adb3
+tip 5:e4ed49b8a8f0
+� 3:d47908dab82f
+% utf-8
+tip 5:e4ed49b8a8f0
+é 3:d47908dab82f
+% ascii
+? 5:e4ed49b8a8f0
+% latin-1
+� 5:e4ed49b8a8f0
+% utf-8
+é 5:e4ed49b8a8f0
% utf-8
-tip 4:d8a5d9eaf41e
-é 2:91878608adb3
-% ascii
-? 4:d8a5d9eaf41e
-% latin-1
-� 4:d8a5d9eaf41e
-% utf-8
-é 4:d8a5d9eaf41e
+changeset: 5:e4ed49b8a8f0
+branch: é
+tag: tip
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: latin1 branch
+
+changeset: 4:a02ca5a58e99
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: Added tag é for changeset d47908dab82f
+
+changeset: 3:d47908dab82f
+tag: é
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: utf-8 e' encoded: é
+
+changeset: 2:9db1985f3097
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: latin-1 e' encoded: é
+
+changeset: 1:af6e0db4427c
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: euc-jp: 日本語 = u'\u65e5\u672c\u8a9e'
+
+changeset: 0:60aad1dd20a9
+user: test
+date: Thu Jan 01 00:00:00 1970 +0000
+summary: latin-1 e': �
+