merge
Vadim Gelfer
r2638:8dadff05 merge default
@@ -1,452 +1,464 @@
1 HGRC(5)
1 HGRC(5)
2 =======
2 =======
3 Bryan O'Sullivan <bos@serpentine.com>
3 Bryan O'Sullivan <bos@serpentine.com>
4
4
5 NAME
5 NAME
6 ----
6 ----
7 hgrc - configuration files for Mercurial
7 hgrc - configuration files for Mercurial
8
8
9 SYNOPSIS
9 SYNOPSIS
10 --------
10 --------
11
11
12 The Mercurial system uses a set of configuration files to control
12 The Mercurial system uses a set of configuration files to control
13 aspects of its behaviour.
13 aspects of its behaviour.
14
14
15 FILES
15 FILES
16 -----
16 -----
17
17
18 Mercurial reads configuration data from several files, if they exist.
18 Mercurial reads configuration data from several files, if they exist.
19 The names of these files depend on the system on which Mercurial is
19 The names of these files depend on the system on which Mercurial is
20 installed.
20 installed.
21
21
22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
22 (Unix) <install-root>/etc/mercurial/hgrc.d/*.rc::
23 (Unix) <install-root>/etc/mercurial/hgrc::
23 (Unix) <install-root>/etc/mercurial/hgrc::
24 Per-installation configuration files, searched for in the
24 Per-installation configuration files, searched for in the
25 directory where Mercurial is installed. For example, if installed
25 directory where Mercurial is installed. For example, if installed
26 in /shared/tools, Mercurial will look in
26 in /shared/tools, Mercurial will look in
27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
27 /shared/tools/etc/mercurial/hgrc. Options in these files apply to
28 all Mercurial commands executed by any user in any directory.
28 all Mercurial commands executed by any user in any directory.
29
29
30 (Unix) /etc/mercurial/hgrc.d/*.rc::
30 (Unix) /etc/mercurial/hgrc.d/*.rc::
31 (Unix) /etc/mercurial/hgrc::
31 (Unix) /etc/mercurial/hgrc::
32 (Windows) C:\Mercurial\Mercurial.ini::
32 (Windows) C:\Mercurial\Mercurial.ini::
33 Per-system configuration files, for the system on which Mercurial
33 Per-system configuration files, for the system on which Mercurial
34 is running. Options in these files apply to all Mercurial
34 is running. Options in these files apply to all Mercurial
35 commands executed by any user in any directory. Options in these
35 commands executed by any user in any directory. Options in these
36 files override per-installation options.
36 files override per-installation options.
37
37
38 (Unix) $HOME/.hgrc::
38 (Unix) $HOME/.hgrc::
39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
39 (Windows) C:\Documents and Settings\USERNAME\Mercurial.ini::
40 (Windows) $HOME\Mercurial.ini::
40 (Windows) $HOME\Mercurial.ini::
41 Per-user configuration file, for the user running Mercurial.
41 Per-user configuration file, for the user running Mercurial.
42 Options in this file apply to all Mercurial commands executed by
42 Options in this file apply to all Mercurial commands executed by
43 this user in any directory. Options in this file override
43 this user in any directory. Options in this file override
44 per-installation and per-system options.
44 per-installation and per-system options.
45 On Windows, only one of these files is used, chosen according to
45 On Windows, only one of these files is used, chosen according to
46 whether the HOME environment variable is defined.
46 whether the HOME environment variable is defined.
47
47
48 (Unix, Windows) <repo>/.hg/hgrc::
48 (Unix, Windows) <repo>/.hg/hgrc::
49 Per-repository configuration options that only apply in a
49 Per-repository configuration options that only apply in a
50 particular repository. This file is not version-controlled, and
50 particular repository. This file is not version-controlled, and
51 will not get transferred during a "clone" operation. Options in
51 will not get transferred during a "clone" operation. Options in
52 this file override options in all other configuration files.
52 this file override options in all other configuration files.
53
53
54 SYNTAX
54 SYNTAX
55 ------
55 ------
56
56
57 A configuration file consists of sections, led by a "[section]" header
57 A configuration file consists of sections, led by a "[section]" header
58 and followed by "name: value" entries; "name=value" is also accepted.
58 and followed by "name: value" entries; "name=value" is also accepted.
59
59
60 [spam]
60 [spam]
61 eggs=ham
61 eggs=ham
62 green=
62 green=
63 eggs
63 eggs
64
64
65 Each line contains one entry. If the lines that follow are indented,
65 Each line contains one entry. If the lines that follow are indented,
66 they are treated as continuations of that entry.
66 they are treated as continuations of that entry.
67
67
68 Leading whitespace is removed from values. Empty lines are skipped.
68 Leading whitespace is removed from values. Empty lines are skipped.
69
69
70 The optional values can contain format strings which refer to other
70 The optional values can contain format strings which refer to other
71 values in the same section, or values in a special DEFAULT section.
71 values in the same section, or values in a special DEFAULT section.
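For example, a value can refer to another value in the same section (a
sketch assuming the ConfigParser-style "%(name)s" expansion form):

[spam]
side = ham
meal = %(side)s and eggs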
72
72
73 Lines beginning with "#" or ";" are ignored and may be used to provide
73 Lines beginning with "#" or ";" are ignored and may be used to provide
74 comments.
74 comments.
75
75
76 SECTIONS
76 SECTIONS
77 --------
77 --------
78
78
79 This section describes the different sections that may appear in a
79 This section describes the different sections that may appear in a
80 Mercurial "hgrc" file, the purpose of each section, its possible
80 Mercurial "hgrc" file, the purpose of each section, its possible
81 keys, and their possible values.
81 keys, and their possible values.
82
82
83 decode/encode::
83 decode/encode::
84 Filters for transforming files on checkout/checkin. This would
84 Filters for transforming files on checkout/checkin. This would
85 typically be used for newline processing or other
85 typically be used for newline processing or other
86 localization/canonicalization of files.
86 localization/canonicalization of files.
87
87
88 Filters consist of a filter pattern followed by a filter command.
88 Filters consist of a filter pattern followed by a filter command.
89 Filter patterns are globs by default, rooted at the repository
89 Filter patterns are globs by default, rooted at the repository
90 root. For example, to match any file ending in ".txt" in the root
90 root. For example, to match any file ending in ".txt" in the root
91 directory only, use the pattern "*.txt". To match any file ending
91 directory only, use the pattern "*.txt". To match any file ending
92 in ".c" anywhere in the repository, use the pattern "**.c".
92 in ".c" anywhere in the repository, use the pattern "**.c".
93
93
94 The filter command can start with a specifier, either "pipe:" or
94 The filter command can start with a specifier, either "pipe:" or
95 "tempfile:". If no specifier is given, "pipe:" is used by default.
95 "tempfile:". If no specifier is given, "pipe:" is used by default.
96
96
97 A "pipe:" command must accept data on stdin and return the
97 A "pipe:" command must accept data on stdin and return the
98 transformed data on stdout.
98 transformed data on stdout.
99
99
100 Pipe example:
100 Pipe example:
101
101
102 [encode]
102 [encode]
103 # uncompress gzip files on checkin to improve delta compression
103 # uncompress gzip files on checkin to improve delta compression
104 # note: not necessarily a good idea, just an example
104 # note: not necessarily a good idea, just an example
105 *.gz = pipe: gunzip
105 *.gz = pipe: gunzip
106
106
107 [decode]
107 [decode]
108 # recompress gzip files when writing them to the working dir (we
108 # recompress gzip files when writing them to the working dir (we
109 # can safely omit "pipe:", because it's the default)
109 # can safely omit "pipe:", because it's the default)
110 *.gz = gzip
110 *.gz = gzip
111
111
112 A "tempfile:" command is a template. The string INFILE is replaced
112 A "tempfile:" command is a template. The string INFILE is replaced
113 with the name of a temporary file that contains the data to be
113 with the name of a temporary file that contains the data to be
114 filtered by the command. The string OUTFILE is replaced with the
114 filtered by the command. The string OUTFILE is replaced with the
115 name of an empty temporary file, where the filtered data must be
115 name of an empty temporary file, where the filtered data must be
116 written by the command.
116 written by the command.
117
117
118 NOTE: the tempfile mechanism is recommended for Windows systems,
118 NOTE: the tempfile mechanism is recommended for Windows systems,
119 where the standard shell I/O redirection operators often have
119 where the standard shell I/O redirection operators often have
120 strange effects. In particular, if you are doing line ending
120 strange effects. In particular, if you are doing line ending
121 conversion on Windows using the popular dos2unix and unix2dos
121 conversion on Windows using the popular dos2unix and unix2dos
122 programs, you *must* use the tempfile mechanism, as using pipes will
122 programs, you *must* use the tempfile mechanism, as using pipes will
123 corrupt the contents of your files.
123 corrupt the contents of your files.
124
124
125 Tempfile example:
125 Tempfile example:
126
126
127 [encode]
127 [encode]
128 # convert files to unix line ending conventions on checkin
128 # convert files to unix line ending conventions on checkin
129 **.txt = tempfile: dos2unix -n INFILE OUTFILE
129 **.txt = tempfile: dos2unix -n INFILE OUTFILE
130
130
131 [decode]
131 [decode]
132 # convert files to windows line ending conventions when writing
132 # convert files to windows line ending conventions when writing
133 # them to the working dir
133 # them to the working dir
134 **.txt = tempfile: unix2dos -n INFILE OUTFILE
134 **.txt = tempfile: unix2dos -n INFILE OUTFILE
135
135
136 email::
136 email::
137 Settings for extensions that send email messages.
137 Settings for extensions that send email messages.
138 from;;
138 from;;
139 Optional. Email address to use in "From" header and SMTP envelope
139 Optional. Email address to use in "From" header and SMTP envelope
140 of outgoing messages.
140 of outgoing messages.
141 method;;
141 method;;
142 Optional. Method to use to send email messages. If value is
142 Optional. Method to use to send email messages. If value is
143 "smtp" (default), use SMTP (see section "[mail]" for
143 "smtp" (default), use SMTP (see section "[mail]" for
144 configuration). Otherwise, the value is used as the name of a program that
144 configuration). Otherwise, the value is used as the name of a program that
145 acts like sendmail (takes "-f" option for sender, list of
145 acts like sendmail (takes "-f" option for sender, list of
146 recipients on command line, message on stdin). Normally, setting
146 recipients on command line, message on stdin). Normally, setting
147 this to "sendmail" or "/usr/sbin/sendmail" is enough to use
147 this to "sendmail" or "/usr/sbin/sendmail" is enough to use
148 sendmail to send messages.
148 sendmail to send messages.
149
149
150 Email example:
150 Email example:
151
151
152 [email]
152 [email]
153 from = Joseph User <joe.user@example.com>
153 from = Joseph User <joe.user@example.com>
154 method = /usr/sbin/sendmail
154 method = /usr/sbin/sendmail
155
155
156 extensions::
156 extensions::
157 Mercurial has an extension mechanism for adding new features. To
157 Mercurial has an extension mechanism for adding new features. To
158 enable an extension, create an entry for it in this section.
158 enable an extension, create an entry for it in this section.
159
159
160 If you know that the extension is already in Python's search path,
160 If you know that the extension is already in Python's search path,
161 you can give the name of the module, followed by "=", with nothing
161 you can give the name of the module, followed by "=", with nothing
162 after the "=".
162 after the "=".
163
163
164 Otherwise, give a name that you choose, followed by "=", followed by
164 Otherwise, give a name that you choose, followed by "=", followed by
165 the path to the ".py" file (including the file name extension) that
165 the path to the ".py" file (including the file name extension) that
166 defines the extension.
166 defines the extension.
167
167
168 Example for ~/.hgrc:
168 Example for ~/.hgrc:
169
169
170 [extensions]
170 [extensions]
171 # (the mq extension will get loaded from mercurial's path)
171 # (the mq extension will get loaded from mercurial's path)
172 hgext.mq =
172 hgext.mq =
173 # (this extension will get loaded from the file specified)
173 # (this extension will get loaded from the file specified)
174 myfeature = ~/.hgext/myfeature.py
174 myfeature = ~/.hgext/myfeature.py
175
175
176 hooks::
176 hooks::
177 Commands or Python functions that get automatically executed by
177 Commands or Python functions that get automatically executed by
178 various actions such as starting or finishing a commit. Multiple
178 various actions such as starting or finishing a commit. Multiple
179 hooks can be run for the same action by appending a suffix to the
179 hooks can be run for the same action by appending a suffix to the
180 action. Overriding a site-wide hook can be done by changing its
180 action. Overriding a site-wide hook can be done by changing its
181 value or setting it to an empty string.
181 value or setting it to an empty string.
182
182
183 Example .hg/hgrc:
183 Example .hg/hgrc:
184
184
185 [hooks]
185 [hooks]
186 # do not use the site-wide hook
186 # do not use the site-wide hook
187 incoming =
187 incoming =
188 incoming.email = /my/email/hook
188 incoming.email = /my/email/hook
189 incoming.autobuild = /my/build/hook
189 incoming.autobuild = /my/build/hook
190
190
191 Most hooks are run with environment variables set that provide useful
191 Most hooks are run with environment variables set that provide useful
192 additional information. For each hook below, the environment variables
192 additional information. For each hook below, the environment variables
193 it is passed are listed with names of the form "$HG_foo".
193 it is passed are listed with names of the form "$HG_foo".
194
194
195 changegroup;;
195 changegroup;;
196 Run after a changegroup has been added via push, pull or
196 Run after a changegroup has been added via push, pull or
197 unbundle. ID of the first new changeset is in $HG_NODE.
197 unbundle. ID of the first new changeset is in $HG_NODE.
198 commit;;
198 commit;;
199 Run after a changeset has been created in the local repository.
199 Run after a changeset has been created in the local repository.
200 ID of the newly created changeset is in $HG_NODE. Parent
200 ID of the newly created changeset is in $HG_NODE. Parent
201 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
201 changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
202 incoming;;
202 incoming;;
203 Run after a changeset has been pulled, pushed, or unbundled into
203 Run after a changeset has been pulled, pushed, or unbundled into
204 the local repository. The ID of the newly arrived changeset is in
204 the local repository. The ID of the newly arrived changeset is in
205 $HG_NODE.
205 $HG_NODE.
206 outgoing;;
206 outgoing;;
207 Run after sending changes from local repository to another. ID of
207 Run after sending changes from local repository to another. ID of
208 first changeset sent is in $HG_NODE. Source of operation is in
208 first changeset sent is in $HG_NODE. Source of operation is in
209 $HG_SOURCE; see "preoutgoing" hook for description.
209 $HG_SOURCE; see "preoutgoing" hook for description.
210 prechangegroup;;
210 prechangegroup;;
211 Run before a changegroup is added via push, pull or unbundle.
211 Run before a changegroup is added via push, pull or unbundle.
212 Exit status 0 allows the changegroup to proceed. Non-zero status
212 Exit status 0 allows the changegroup to proceed. Non-zero status
213 will cause the push, pull or unbundle to fail.
213 will cause the push, pull or unbundle to fail.
214 precommit;;
214 precommit;;
215 Run before starting a local commit. Exit status 0 allows the
215 Run before starting a local commit. Exit status 0 allows the
216 commit to proceed. Non-zero status will cause the commit to fail.
216 commit to proceed. Non-zero status will cause the commit to fail.
217 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
217 Parent changeset IDs are in $HG_PARENT1 and $HG_PARENT2.
218 preoutgoing;;
218 preoutgoing;;
219 Run before computing changes to send from the local repository to
219 Run before computing changes to send from the local repository to
220 another. Non-zero status will cause failure. This lets you
220 another. Non-zero status will cause failure. This lets you
221 prevent pull over http or ssh. It also prevents local pull, push
221 prevent pull over http or ssh. It also prevents local pull, push
222 (outbound) and bundle commands, but is not an effective protection,
222 (outbound) and bundle commands, but is not an effective protection,
223 since the files can simply be copied instead. Source of operation is in
223 since the files can simply be copied instead. Source of operation is in
224 $HG_SOURCE. If "serve", operation is happening on behalf of
224 $HG_SOURCE. If "serve", operation is happening on behalf of
225 remote ssh or http repository. If "push", "pull" or "bundle",
225 remote ssh or http repository. If "push", "pull" or "bundle",
226 operation is happening on behalf of repository on same system.
226 operation is happening on behalf of repository on same system.
227 pretag;;
227 pretag;;
228 Run before creating a tag. Exit status 0 allows the tag to be
228 Run before creating a tag. Exit status 0 allows the tag to be
229 created. Non-zero status will cause the tag to fail. ID of
229 created. Non-zero status will cause the tag to fail. ID of
230 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
230 changeset to tag is in $HG_NODE. Name of tag is in $HG_TAG. Tag
231 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
231 is local if $HG_LOCAL=1, in repo if $HG_LOCAL=0.
232 pretxnchangegroup;;
232 pretxnchangegroup;;
233 Run after a changegroup has been added via push, pull or unbundle,
233 Run after a changegroup has been added via push, pull or unbundle,
234 but before the transaction has been committed. Changegroup is
234 but before the transaction has been committed. Changegroup is
235 visible to hook program. This lets you validate incoming changes
235 visible to hook program. This lets you validate incoming changes
236 before accepting them. Passed the ID of the first new changeset
236 before accepting them. Passed the ID of the first new changeset
237 in $HG_NODE. Exit status 0 allows the transaction to commit.
237 in $HG_NODE. Exit status 0 allows the transaction to commit.
238 Non-zero status will cause the transaction to be rolled back and
238 Non-zero status will cause the transaction to be rolled back and
239 the push, pull or unbundle will fail.
239 the push, pull or unbundle will fail.
240 pretxncommit;;
240 pretxncommit;;
241 Run after a changeset has been created but the transaction not yet
241 Run after a changeset has been created but the transaction not yet
242 committed. Changeset is visible to hook program. This lets you
242 committed. Changeset is visible to hook program. This lets you
243 validate commit message and changes. Exit status 0 allows the
243 validate commit message and changes. Exit status 0 allows the
244 commit to proceed. Non-zero status will cause the transaction to
244 commit to proceed. Non-zero status will cause the transaction to
245 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
245 be rolled back. ID of changeset is in $HG_NODE. Parent changeset
246 IDs are in $HG_PARENT1 and $HG_PARENT2.
246 IDs are in $HG_PARENT1 and $HG_PARENT2.
247 preupdate;;
247 preupdate;;
248 Run before updating the working directory. Exit status 0 allows
248 Run before updating the working directory. Exit status 0 allows
249 the update to proceed. Non-zero status will prevent the update.
249 the update to proceed. Non-zero status will prevent the update.
250 Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
250 Changeset ID of first new parent is in $HG_PARENT1. If merge, ID
251 of second new parent is in $HG_PARENT2.
251 of second new parent is in $HG_PARENT2.
252 tag;;
252 tag;;
253 Run after a tag is created. ID of tagged changeset is in
253 Run after a tag is created. ID of tagged changeset is in
254 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
254 $HG_NODE. Name of tag is in $HG_TAG. Tag is local if
255 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
255 $HG_LOCAL=1, in repo if $HG_LOCAL=0.
256 update;;
256 update;;
257 Run after updating the working directory. Changeset ID of first
257 Run after updating the working directory. Changeset ID of first
258 new parent is in $HG_PARENT1. If merge, ID of second new parent
258 new parent is in $HG_PARENT1. If merge, ID of second new parent
259 is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
259 is in $HG_PARENT2. If update succeeded, $HG_ERROR=0. If update
260 failed (e.g. because conflicts not resolved), $HG_ERROR=1.
260 failed (e.g. because conflicts not resolved), $HG_ERROR=1.
261
261
262 Note: In earlier releases, the names of hook environment variables
262 Note: In earlier releases, the names of hook environment variables
263 did not have a "HG_" prefix. The old unprefixed names are no longer
263 did not have a "HG_" prefix. The old unprefixed names are no longer
264 provided in the environment.
264 provided in the environment.
265
265
266 The syntax for Python hooks is as follows:
266 The syntax for Python hooks is as follows:
267
267
268 hookname = python:modulename.submodule.callable
268 hookname = python:modulename.submodule.callable
269
269
270 Python hooks are run within the Mercurial process. Each hook is
270 Python hooks are run within the Mercurial process. Each hook is
271 called with at least three keyword arguments: a ui object (keyword
271 called with at least three keyword arguments: a ui object (keyword
272 "ui"), a repository object (keyword "repo"), and a "hooktype"
272 "ui"), a repository object (keyword "repo"), and a "hooktype"
273 keyword that tells what kind of hook is used. Arguments listed as
273 keyword that tells what kind of hook is used. Arguments listed as
274 environment variables above are passed as keyword arguments, with no
274 environment variables above are passed as keyword arguments, with no
275 "HG_" prefix, and names in lower case.
275 "HG_" prefix, and names in lower case.
276
276
277 A Python hook must return a "true" value to succeed. Returning a
277 A Python hook must return a "true" value to succeed. Returning a
278 "false" value or raising an exception is treated as failure of the
278 "false" value or raising an exception is treated as failure of the
279 hook.
279 hook.
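As an illustration, a pretxncommit hook that refuses merge changesets
could be configured and written as follows (the module name "myhooks"
and its location on Python's search path are hypothetical):

[hooks]
pretxncommit.nomerges = python:myhooks.forbid_merges

# myhooks.py (hypothetical module)
def forbid_merges(ui, repo, hooktype, node=None,
                  parent1=None, parent2=None, **kwargs):
    """Reject commits that would create a merge changeset."""
    if parent2:
        ui.warn('merge changesets are not allowed here\n')
        return False   # false value: hook fails, transaction rolls back
    return True        # true value: hook succeeds (see above)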
280
280
281 http_proxy::
281 http_proxy::
282 Used to access web-based Mercurial repositories through an HTTP
282 Used to access web-based Mercurial repositories through an HTTP
283 proxy.
283 proxy.
284 host;;
284 host;;
285 Host name and (optional) port of the proxy server, for example
285 Host name and (optional) port of the proxy server, for example
286 "myproxy:8000".
286 "myproxy:8000".
287 no;;
287 no;;
288 Optional. Comma-separated list of host names that should bypass
288 Optional. Comma-separated list of host names that should bypass
289 the proxy.
289 the proxy.
290 passwd;;
290 passwd;;
291 Optional. Password to authenticate with at the proxy server.
291 Optional. Password to authenticate with at the proxy server.
292 user;;
292 user;;
293 Optional. User name to authenticate with at the proxy server.
293 Optional. User name to authenticate with at the proxy server.
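Proxy example (the host name, bypass list and credentials are made up):

[http_proxy]
host = myproxy:8000
no = localhost,hg.internal.example.com
user = proxyuser
passwd = proxypass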
294
294
295 smtp::
295 smtp::
296 Configuration for extensions that need to send email messages.
296 Configuration for extensions that need to send email messages.
297 host;;
297 host;;
298 Optional. Host name of mail server. Default: "mail".
298 Optional. Host name of mail server. Default: "mail".
299 port;;
299 port;;
300 Optional. Port to connect to on mail server. Default: 25.
300 Optional. Port to connect to on mail server. Default: 25.
301 tls;;
301 tls;;
302 Optional. Whether to connect to mail server using TLS. True or
302 Optional. Whether to connect to mail server using TLS. True or
303 False. Default: False.
303 False. Default: False.
304 username;;
304 username;;
305 Optional. User name to authenticate to SMTP server with.
305 Optional. User name to authenticate to SMTP server with.
306 If username is specified, password must also be specified.
306 If username is specified, password must also be specified.
307 Default: none.
307 Default: none.
308 password;;
308 password;;
309 Optional. Password to authenticate to SMTP server with.
309 Optional. Password to authenticate to SMTP server with.
310 If username is specified, password must also be specified.
310 If username is specified, password must also be specified.
311 Default: none.
311 Default: none.
312 local_hostname;;
312 local_hostname;;
313 Optional. The hostname that the sender can use to identify itself
313 Optional. The hostname that the sender can use to identify itself
314 to the MTA.
314 to the MTA.
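SMTP example (the mail server and credentials are made up):

[smtp]
host = mail.example.com
port = 25
tls = True
username = hguser
password = secret
local_hostname = workstation.example.com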
315
315
316 paths::
316 paths::
317 Assigns symbolic names to repositories. The left side is the
317 Assigns symbolic names to repositories. The left side is the
318 symbolic name, and the right gives the directory or URL that is the
318 symbolic name, and the right gives the directory or URL that is the
319 location of the repository. Default paths can be declared by
319 location of the repository. Default paths can be declared by
320 setting the following entries.
320 setting the following entries.
321 default;;
321 default;;
322 Directory or URL to use when pulling if no source is specified.
322 Directory or URL to use when pulling if no source is specified.
323 Default is set to the repository from which the current repository
323 Default is set to the repository from which the current repository
324 was cloned.
324 was cloned.
325 default-push;;
325 default-push;;
326 Optional. Directory or URL to use when pushing if no destination
326 Optional. Directory or URL to use when pushing if no destination
327 is specified.
327 is specified.
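Paths example (the locations shown are made up):

[paths]
default = http://hg.example.com/main
default-push = ssh://hg.example.com//repos/main
backup = /mnt/backup/main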
328
328
329 server::
330 Controls generic server settings.
331 uncompressed;;
332 Whether to allow clients to clone a repo using the uncompressed
333 streaming protocol. This transfers about 40% more data than a
334 regular clone, but uses less memory and CPU on both server and
335 client. Over a LAN (100Mbps or better) or a very fast WAN, an
336 uncompressed streaming clone is a lot faster (~10x) than a regular
337 clone. Over most WAN connections (anything slower than about
338 6Mbps), uncompressed streaming is slower, because of the extra
339 data transfer overhead. Default is False.
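For example, to let clients on a fast local network request streaming
clones:

[server]
uncompressed = True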
340
329 ui::
341 ui::
330 User interface controls.
342 User interface controls.
331 debug;;
343 debug;;
332 Print debugging information. True or False. Default is False.
344 Print debugging information. True or False. Default is False.
333 editor;;
345 editor;;
334 The editor to use during a commit. Default is $EDITOR or "vi".
346 The editor to use during a commit. Default is $EDITOR or "vi".
335 ignore;;
347 ignore;;
336 A file to read per-user ignore patterns from. This file should be in
348 A file to read per-user ignore patterns from. This file should be in
337 the same format as a repository-wide .hgignore file. This option
349 the same format as a repository-wide .hgignore file. This option
338 supports hook syntax, so if you want to specify multiple ignore
350 supports hook syntax, so if you want to specify multiple ignore
339 files, you can do so by setting something like
351 files, you can do so by setting something like
340 "ignore.other = ~/.hgignore2". For details of the ignore file
352 "ignore.other = ~/.hgignore2". For details of the ignore file
341 format, see the hgignore(5) man page.
353 format, see the hgignore(5) man page.
342 interactive;;
354 interactive;;
343 Allow prompting the user. True or False. Default is True.
355 Allow prompting the user. True or False. Default is True.
344 logtemplate;;
356 logtemplate;;
345 Template string for commands that print changesets.
357 Template string for commands that print changesets.
346 style;;
358 style;;
347 Name of style to use for command output.
359 Name of style to use for command output.
348 merge;;
360 merge;;
349 The conflict resolution program to use during a manual merge.
361 The conflict resolution program to use during a manual merge.
350 Default is "hgmerge".
362 Default is "hgmerge".
351 quiet;;
363 quiet;;
352 Reduce the amount of output printed. True or False. Default is False.
364 Reduce the amount of output printed. True or False. Default is False.
353 remotecmd;;
365 remotecmd;;
354 Remote command to use for clone/push/pull operations. Default is 'hg'.
366 Remote command to use for clone/push/pull operations. Default is 'hg'.
355 ssh;;
367 ssh;;
356 Command to use for SSH connections. Default is 'ssh'.
368 Command to use for SSH connections. Default is 'ssh'.
357 timeout;;
369 timeout;;
358 The timeout used when a lock is held (in seconds), a negative value
370 The timeout used when a lock is held (in seconds), a negative value
359 means no timeout. Default is 600.
371 means no timeout. Default is 600.
360 username;;
372 username;;
361 The committer of a changeset created when running "commit".
373 The committer of a changeset created when running "commit".
362 Typically a person's name and email address, e.g. "Fred Widget
374 Typically a person's name and email address, e.g. "Fred Widget
363 <fred@example.com>". Default is $EMAIL or username@hostname, unless
375 <fred@example.com>". Default is $EMAIL or username@hostname, unless
364 username is set to an empty string, which forces specifying the
376 username is set to an empty string, which forces specifying the
365 username manually.
377 username manually.
366 verbose;;
378 verbose;;
367 Increase the amount of output printed. True or False. Default is False.
379 Increase the amount of output printed. True or False. Default is False.
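ui example (the values shown are illustrative):

[ui]
username = Fred Widget <fred@example.com>
editor = vi
merge = hgmerge
verbose = True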
368
380
369
381
370 web::
382 web::
371 Web interface configuration.
383 Web interface configuration.
372 accesslog;;
384 accesslog;;
373 Where to output the access log. Default is stdout.
385 Where to output the access log. Default is stdout.
374 address;;
386 address;;
375 Interface address to bind to. Default is all.
387 Interface address to bind to. Default is all.
376 allow_archive;;
388 allow_archive;;
377 List of archive formats (bz2, gz, zip) allowed for downloading.
389 List of archive formats (bz2, gz, zip) allowed for downloading.
378 Default is empty.
390 Default is empty.
379 allowbz2;;
391 allowbz2;;
380 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
392 (DEPRECATED) Whether to allow .tar.bz2 downloading of repo revisions.
381 Default is false.
393 Default is false.
382 allowgz;;
394 allowgz;;
383 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
395 (DEPRECATED) Whether to allow .tar.gz downloading of repo revisions.
384 Default is false.
396 Default is false.
385 allowpull;;
397 allowpull;;
386 Whether to allow pulling from the repository. Default is true.
398 Whether to allow pulling from the repository. Default is true.
387 allow_push;;
399 allow_push;;
388 Whether to allow pushing to the repository. If empty or not set,
400 Whether to allow pushing to the repository. If empty or not set,
389 push is not allowed. If the special value "*", any remote user
401 push is not allowed. If the special value "*", any remote user
390 can push, including unauthenticated users. Otherwise, the remote
402 can push, including unauthenticated users. Otherwise, the remote
391 user must have been authenticated, and the authenticated user name
403 user must have been authenticated, and the authenticated user name
392 must be present in this list (separated by whitespace or ",").
404 must be present in this list (separated by whitespace or ",").
393 The contents of the allow_push list are examined after the
405 The contents of the allow_push list are examined after the
394 deny_push list.
406 deny_push list.
395 allowzip;;
407 allowzip;;
396 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
408 (DEPRECATED) Whether to allow .zip downloading of repo revisions.
397 Default is false. This feature creates temporary files.
409 Default is false. This feature creates temporary files.
398 baseurl;;
410 baseurl;;
399 Base URL to use when publishing URLs in other locations, so
411 Base URL to use when publishing URLs in other locations, so
400 third-party tools like email notification hooks can construct URLs.
412 third-party tools like email notification hooks can construct URLs.
401 Example: "http://hgserver/repos/"
413 Example: "http://hgserver/repos/"
402 contact;;
414 contact;;
403 Name or email address of the person in charge of the repository.
415 Name or email address of the person in charge of the repository.
404 Default is "unknown".
416 Default is "unknown".
405 deny_push;;
417 deny_push;;
406 Whether to deny pushing to the repository. If empty or not set,
418 Whether to deny pushing to the repository. If empty or not set,
407 push is not denied. If the special value "*", all remote users
419 push is not denied. If the special value "*", all remote users
408 are denied push. Otherwise, unauthenticated users are all denied,
420 are denied push. Otherwise, unauthenticated users are all denied,
409 and any authenticated user name present in this list (separated by
421 and any authenticated user name present in this list (separated by
410 whitespace or ",") is also denied. The contents of the deny_push
422 whitespace or ",") is also denied. The contents of the deny_push
411 list are examined before the allow_push list.
423 list are examined before the allow_push list.
412 description;;
424 description;;
413 Textual description of the repository's purpose or contents.
425 Textual description of the repository's purpose or contents.
414 Default is "unknown".
426 Default is "unknown".
415 errorlog;;
427 errorlog;;
416 Where to output the error log. Default is stderr.
428 Where to output the error log. Default is stderr.
417 ipv6;;
429 ipv6;;
418 Whether to use IPv6. Default is false.
430 Whether to use IPv6. Default is false.
419 name;;
431 name;;
420 Repository name to use in the web interface. Default is current
432 Repository name to use in the web interface. Default is current
421 working directory.
433 working directory.
422 maxchanges;;
434 maxchanges;;
423 Maximum number of changes to list on the changelog. Default is 10.
435 Maximum number of changes to list on the changelog. Default is 10.
424 maxfiles;;
436 maxfiles;;
425 Maximum number of files to list per changeset. Default is 10.
437 Maximum number of files to list per changeset. Default is 10.
426 port;;
438 port;;
427 Port to listen on. Default is 8000.
439 Port to listen on. Default is 8000.
428 push_ssl;;
440 push_ssl;;
429 Whether to require that inbound pushes be transported over SSL to
441 Whether to require that inbound pushes be transported over SSL to
430 prevent password sniffing. Default is true.
442 prevent password sniffing. Default is true.
431 style;;
443 style;;
432 Which template map style to use.
444 Which template map style to use.
433 templates;;
445 templates;;
434 Where to find the HTML templates. Default is install path.
446 Where to find the HTML templates. Default is install path.
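Web example (the values shown are illustrative):

[web]
name = main
description = main development repository
contact = Joseph User <joe.user@example.com>
allow_archive = gz zip
allow_push = joe, bos
push_ssl = True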
435
447
436
448
437 AUTHOR
449 AUTHOR
438 ------
450 ------
439 Bryan O'Sullivan <bos@serpentine.com>.
451 Bryan O'Sullivan <bos@serpentine.com>.
440
452
441 Mercurial was written by Matt Mackall <mpm@selenic.com>.
453 Mercurial was written by Matt Mackall <mpm@selenic.com>.
442
454
443 SEE ALSO
455 SEE ALSO
444 --------
456 --------
445 hg(1), hgignore(5)
457 hg(1), hgignore(5)
446
458
447 COPYING
459 COPYING
448 -------
460 -------
449 This manual page is copyright 2005 Bryan O'Sullivan.
461 This manual page is copyright 2005 Bryan O'Sullivan.
450 Mercurial is copyright 2005, 2006 Matt Mackall.
462 Mercurial is copyright 2005, 2006 Matt Mackall.
451 Free use of this software is granted under the terms of the GNU General
463 Free use of this software is granted under the terms of the GNU General
452 Public License (GPL).
464 Public License (GPL).
@@ -1,3521 +1,3528 @@
1 # commands.py - command processing for mercurial
1 # commands.py - command processing for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from demandload import demandload
8 from demandload import demandload
9 from node import *
9 from node import *
10 from i18n import gettext as _
10 from i18n import gettext as _
11 demandload(globals(), "os re sys signal shutil imp urllib pdb")
11 demandload(globals(), "os re sys signal shutil imp urllib pdb")
12 demandload(globals(), "fancyopts ui hg util lock revlog templater bundlerepo")
12 demandload(globals(), "fancyopts ui hg util lock revlog templater bundlerepo")
13 demandload(globals(), "fnmatch mdiff random signal tempfile time")
13 demandload(globals(), "fnmatch mdiff random signal tempfile time")
14 demandload(globals(), "traceback errno socket version struct atexit sets bz2")
14 demandload(globals(), "traceback errno socket version struct atexit sets bz2")
15 demandload(globals(), "archival cStringIO changegroup email.Parser")
15 demandload(globals(), "archival cStringIO changegroup email.Parser")
16 demandload(globals(), "hgweb.server sshserver")
16 demandload(globals(), "hgweb.server sshserver")
17
17
18 class UnknownCommand(Exception):
18 class UnknownCommand(Exception):
19 """Exception raised if command is not in the command table."""
19 """Exception raised if command is not in the command table."""
20 class AmbiguousCommand(Exception):
20 class AmbiguousCommand(Exception):
21 """Exception raised if command shortcut matches more than one command."""
21 """Exception raised if command shortcut matches more than one command."""
22
22
23 def bail_if_changed(repo):
23 def bail_if_changed(repo):
24 modified, added, removed, deleted, unknown = repo.changes()
24 modified, added, removed, deleted, unknown = repo.changes()
25 if modified or added or removed or deleted:
25 if modified or added or removed or deleted:
26 raise util.Abort(_("outstanding uncommitted changes"))
26 raise util.Abort(_("outstanding uncommitted changes"))
27
27
28 def filterfiles(filters, files):
28 def filterfiles(filters, files):
29 l = [x for x in files if x in filters]
29 l = [x for x in files if x in filters]
30
30
31 for t in filters:
31 for t in filters:
32 if t and t[-1] != "/":
32 if t and t[-1] != "/":
33 t += "/"
33 t += "/"
34 l += [x for x in files if x.startswith(t)]
34 l += [x for x in files if x.startswith(t)]
35 return l
35 return l
36
36
37 def relpath(repo, args):
37 def relpath(repo, args):
38 cwd = repo.getcwd()
38 cwd = repo.getcwd()
39 if cwd:
39 if cwd:
40 return [util.normpath(os.path.join(cwd, x)) for x in args]
40 return [util.normpath(os.path.join(cwd, x)) for x in args]
41 return args
41 return args
42
42
43 def matchpats(repo, pats=[], opts={}, head=''):
43 def matchpats(repo, pats=[], opts={}, head=''):
44 cwd = repo.getcwd()
44 cwd = repo.getcwd()
45 if not pats and cwd:
45 if not pats and cwd:
46 opts['include'] = [os.path.join(cwd, i) for i in opts['include']]
46 opts['include'] = [os.path.join(cwd, i) for i in opts['include']]
47 opts['exclude'] = [os.path.join(cwd, x) for x in opts['exclude']]
47 opts['exclude'] = [os.path.join(cwd, x) for x in opts['exclude']]
48 cwd = ''
48 cwd = ''
49 return util.cmdmatcher(repo.root, cwd, pats or ['.'], opts.get('include'),
49 return util.cmdmatcher(repo.root, cwd, pats or ['.'], opts.get('include'),
50 opts.get('exclude'), head)
50 opts.get('exclude'), head)
51
51
52 def makewalk(repo, pats, opts, node=None, head='', badmatch=None):
52 def makewalk(repo, pats, opts, node=None, head='', badmatch=None):
53 files, matchfn, anypats = matchpats(repo, pats, opts, head)
53 files, matchfn, anypats = matchpats(repo, pats, opts, head)
54 exact = dict(zip(files, files))
54 exact = dict(zip(files, files))
55 def walk():
55 def walk():
56 for src, fn in repo.walk(node=node, files=files, match=matchfn,
56 for src, fn in repo.walk(node=node, files=files, match=matchfn,
57 badmatch=badmatch):
57 badmatch=badmatch):
58 yield src, fn, util.pathto(repo.getcwd(), fn), fn in exact
58 yield src, fn, util.pathto(repo.getcwd(), fn), fn in exact
59 return files, matchfn, walk()
59 return files, matchfn, walk()
60
60
61 def walk(repo, pats, opts, node=None, head='', badmatch=None):
61 def walk(repo, pats, opts, node=None, head='', badmatch=None):
62 files, matchfn, results = makewalk(repo, pats, opts, node, head, badmatch)
62 files, matchfn, results = makewalk(repo, pats, opts, node, head, badmatch)
63 for r in results:
63 for r in results:
64 yield r
64 yield r
65
65
66 def walkchangerevs(ui, repo, pats, opts):
66 def walkchangerevs(ui, repo, pats, opts):
67 '''Iterate over files and the revs they changed in.
67 '''Iterate over files and the revs they changed in.
68
68
69 Callers most commonly need to iterate backwards over the history
69 Callers most commonly need to iterate backwards over the history
70 they are interested in. Doing so has awful (quadratic-looking)
70 they are interested in. Doing so has awful (quadratic-looking)
71 performance, so we use iterators in a "windowed" way.
71 performance, so we use iterators in a "windowed" way.
72
72
73 We walk a window of revisions in the desired order. Within the
73 We walk a window of revisions in the desired order. Within the
74 window, we first walk forwards to gather data, then in the desired
74 window, we first walk forwards to gather data, then in the desired
75 order (usually backwards) to display it.
75 order (usually backwards) to display it.
76
76
77 This function returns an (iterator, getchange, matchfn) tuple. The
77 This function returns an (iterator, getchange, matchfn) tuple. The
78 getchange function returns the changelog entry for a numeric
78 getchange function returns the changelog entry for a numeric
79 revision. The iterator yields 3-tuples. They will be of one of
79 revision. The iterator yields 3-tuples. They will be of one of
80 the following forms:
80 the following forms:
81
81
82 "window", incrementing, lastrev: stepping through a window,
82 "window", incrementing, lastrev: stepping through a window,
83 positive if walking forwards through revs, last rev in the
83 positive if walking forwards through revs, last rev in the
84 sequence iterated over - use to reset state for the current window
84 sequence iterated over - use to reset state for the current window
85
85
86 "add", rev, fns: out-of-order traversal of the given file names
86 "add", rev, fns: out-of-order traversal of the given file names
87 fns, which changed during revision rev - use to gather data for
87 fns, which changed during revision rev - use to gather data for
88 possible display
88 possible display
89
89
90 "iter", rev, None: in-order traversal of the revs earlier iterated
90 "iter", rev, None: in-order traversal of the revs earlier iterated
91 over with "add" - use to display data'''
91 over with "add" - use to display data'''
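# Illustrative sketch (not part of the original file) of how a caller
# can consume the (iterator, getchange, matchfn) protocol described
# above; display() is a hypothetical helper:
#
#   chgiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
#   gathered = {}
#   for kind, value, fns in chgiter:
#       if kind == 'window':     # value: walking forwards?, fns: last rev
#           gathered = {}        # reset per-window display state
#       elif kind == 'add':      # gather data for rev "value", files "fns"
#           gathered[value] = fns
#       elif kind == 'iter':     # display rev "value" in the desired order
#           display(ui, repo, value, gathered[value])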
92
92
93 def increasing_windows(start, end, windowsize=8, sizelimit=512):
93 def increasing_windows(start, end, windowsize=8, sizelimit=512):
94 if start < end:
94 if start < end:
95 while start < end:
95 while start < end:
96 yield start, min(windowsize, end-start)
96 yield start, min(windowsize, end-start)
97 start += windowsize
97 start += windowsize
98 if windowsize < sizelimit:
98 if windowsize < sizelimit:
99 windowsize *= 2
99 windowsize *= 2
100 else:
100 else:
101 while start > end:
101 while start > end:
102 yield start, min(windowsize, start-end-1)
102 yield start, min(windowsize, start-end-1)
103 start -= windowsize
103 start -= windowsize
104 if windowsize < sizelimit:
104 if windowsize < sizelimit:
105 windowsize *= 2
105 windowsize *= 2
106
106
107
107
108 files, matchfn, anypats = matchpats(repo, pats, opts)
108 files, matchfn, anypats = matchpats(repo, pats, opts)
109
109
110 if repo.changelog.count() == 0:
110 if repo.changelog.count() == 0:
111 return [], False, matchfn
111 return [], False, matchfn
112
112
113 revs = map(int, revrange(ui, repo, opts['rev'] or ['tip:0']))
113 revs = map(int, revrange(ui, repo, opts['rev'] or ['tip:0']))
114 wanted = {}
114 wanted = {}
115 slowpath = anypats
115 slowpath = anypats
116 fncache = {}
116 fncache = {}
117
117
118 chcache = {}
118 chcache = {}
119 def getchange(rev):
119 def getchange(rev):
120 ch = chcache.get(rev)
120 ch = chcache.get(rev)
121 if ch is None:
121 if ch is None:
122 chcache[rev] = ch = repo.changelog.read(repo.lookup(str(rev)))
122 chcache[rev] = ch = repo.changelog.read(repo.lookup(str(rev)))
123 return ch
123 return ch
124
124
125 if not slowpath and not files:
125 if not slowpath and not files:
126 # No files, no patterns. Display all revs.
126 # No files, no patterns. Display all revs.
127 wanted = dict(zip(revs, revs))
127 wanted = dict(zip(revs, revs))
128 if not slowpath:
128 if not slowpath:
129 # Only files, no patterns. Check the history of each file.
129 # Only files, no patterns. Check the history of each file.
130 def filerevgen(filelog):
130 def filerevgen(filelog):
131 cl_count = repo.changelog.count()
131 for i, window in increasing_windows(filelog.count()-1, -1):
132 for i, window in increasing_windows(filelog.count()-1, -1):
132 revs = []
133 revs = []
133 for j in xrange(i - window, i + 1):
134 for j in xrange(i - window, i + 1):
134 revs.append(filelog.linkrev(filelog.node(j)))
135 revs.append(filelog.linkrev(filelog.node(j)))
135 revs.reverse()
136 revs.reverse()
136 for rev in revs:
137 for rev in revs:
137 yield rev
138 # only yield rev for which we have the changelog, it can
139 # happen while doing "hg log" during a pull or commit
140 if rev < cl_count:
141 yield rev
138
142
139 minrev, maxrev = min(revs), max(revs)
143 minrev, maxrev = min(revs), max(revs)
140 for file_ in files:
144 for file_ in files:
141 filelog = repo.file(file_)
145 filelog = repo.file(file_)
142 # A zero count may be a directory or deleted file, so
146 # A zero count may be a directory or deleted file, so
143 # try to find matching entries on the slow path.
147 # try to find matching entries on the slow path.
144 if filelog.count() == 0:
148 if filelog.count() == 0:
145 slowpath = True
149 slowpath = True
146 break
150 break
147 for rev in filerevgen(filelog):
151 for rev in filerevgen(filelog):
148 if rev <= maxrev:
152 if rev <= maxrev:
149 if rev < minrev:
153 if rev < minrev:
150 break
154 break
151 fncache.setdefault(rev, [])
155 fncache.setdefault(rev, [])
152 fncache[rev].append(file_)
156 fncache[rev].append(file_)
153 wanted[rev] = 1
157 wanted[rev] = 1
154 if slowpath:
158 if slowpath:
155 # The slow path checks files modified in every changeset.
159 # The slow path checks files modified in every changeset.
156 def changerevgen():
160 def changerevgen():
157 for i, window in increasing_windows(repo.changelog.count()-1, -1):
161 for i, window in increasing_windows(repo.changelog.count()-1, -1):
158 for j in xrange(i - window, i + 1):
162 for j in xrange(i - window, i + 1):
159 yield j, getchange(j)[3]
163 yield j, getchange(j)[3]
160
164
161 for rev, changefiles in changerevgen():
165 for rev, changefiles in changerevgen():
162 matches = filter(matchfn, changefiles)
166 matches = filter(matchfn, changefiles)
163 if matches:
167 if matches:
164 fncache[rev] = matches
168 fncache[rev] = matches
165 wanted[rev] = 1
169 wanted[rev] = 1
166
170
167 def iterate():
171 def iterate():
168 for i, window in increasing_windows(0, len(revs)):
172 for i, window in increasing_windows(0, len(revs)):
169 yield 'window', revs[0] < revs[-1], revs[-1]
173 yield 'window', revs[0] < revs[-1], revs[-1]
170 nrevs = [rev for rev in revs[i:i+window]
174 nrevs = [rev for rev in revs[i:i+window]
171 if rev in wanted]
175 if rev in wanted]
172 srevs = list(nrevs)
176 srevs = list(nrevs)
173 srevs.sort()
177 srevs.sort()
174 for rev in srevs:
178 for rev in srevs:
175 fns = fncache.get(rev) or filter(matchfn, getchange(rev)[3])
179 fns = fncache.get(rev) or filter(matchfn, getchange(rev)[3])
176 yield 'add', rev, fns
180 yield 'add', rev, fns
177 for rev in nrevs:
181 for rev in nrevs:
178 yield 'iter', rev, None
182 yield 'iter', rev, None
179 return iterate(), getchange, matchfn
183 return iterate(), getchange, matchfn
180
184
181 revrangesep = ':'
185 revrangesep = ':'
182
186
183 def revfix(repo, val, defval):
187 def revfix(repo, val, defval):
184 '''turn user-level id of changeset into rev number.
188 '''turn user-level id of changeset into rev number.
185 user-level id can be tag, changeset, rev number, or negative rev
189 user-level id can be tag, changeset, rev number, or negative rev
186 number relative to number of revs (-1 is tip, etc).'''
190 number relative to number of revs (-1 is tip, etc).'''
187 if not val:
191 if not val:
188 return defval
192 return defval
189 try:
193 try:
190 num = int(val)
194 num = int(val)
191 if str(num) != val:
195 if str(num) != val:
192 raise ValueError
196 raise ValueError
193 if num < 0:
197 if num < 0:
194 num += repo.changelog.count()
198 num += repo.changelog.count()
195 if num < 0:
199 if num < 0:
196 num = 0
200 num = 0
197 elif num >= repo.changelog.count():
201 elif num >= repo.changelog.count():
198 raise ValueError
202 raise ValueError
199 except ValueError:
203 except ValueError:
200 try:
204 try:
201 num = repo.changelog.rev(repo.lookup(val))
205 num = repo.changelog.rev(repo.lookup(val))
202 except KeyError:
206 except KeyError:
203 raise util.Abort(_('invalid revision identifier %s'), val)
207 raise util.Abort(_('invalid revision identifier %s'), val)
204 return num
208 return num
205
209
206 def revpair(ui, repo, revs):
210 def revpair(ui, repo, revs):
207 '''return pair of nodes, given list of revisions. second item can
211 '''return pair of nodes, given list of revisions. second item can
208 be None, meaning use working dir.'''
212 be None, meaning use working dir.'''
209 if not revs:
213 if not revs:
210 return repo.dirstate.parents()[0], None
214 return repo.dirstate.parents()[0], None
211 end = None
215 end = None
212 if len(revs) == 1:
216 if len(revs) == 1:
213 start = revs[0]
217 start = revs[0]
214 if revrangesep in start:
218 if revrangesep in start:
215 start, end = start.split(revrangesep, 1)
219 start, end = start.split(revrangesep, 1)
216 start = revfix(repo, start, 0)
220 start = revfix(repo, start, 0)
217 end = revfix(repo, end, repo.changelog.count() - 1)
221 end = revfix(repo, end, repo.changelog.count() - 1)
218 else:
222 else:
219 start = revfix(repo, start, None)
223 start = revfix(repo, start, None)
220 elif len(revs) == 2:
224 elif len(revs) == 2:
221 if revrangesep in revs[0] or revrangesep in revs[1]:
225 if revrangesep in revs[0] or revrangesep in revs[1]:
222 raise util.Abort(_('too many revisions specified'))
226 raise util.Abort(_('too many revisions specified'))
223 start = revfix(repo, revs[0], None)
227 start = revfix(repo, revs[0], None)
224 end = revfix(repo, revs[1], None)
228 end = revfix(repo, revs[1], None)
225 else:
229 else:
226 raise util.Abort(_('too many revisions specified'))
230 raise util.Abort(_('too many revisions specified'))
227 if end is not None: end = repo.lookup(str(end))
231 if end is not None: end = repo.lookup(str(end))
228 return repo.lookup(str(start)), end
232 return repo.lookup(str(start)), end
229
233
230 def revrange(ui, repo, revs):
234 def revrange(ui, repo, revs):
231 """Yield revision as strings from a list of revision specifications."""
235 """Yield revision as strings from a list of revision specifications."""
232 seen = {}
236 seen = {}
233 for spec in revs:
237 for spec in revs:
234 if revrangesep in spec:
238 if revrangesep in spec:
235 start, end = spec.split(revrangesep, 1)
239 start, end = spec.split(revrangesep, 1)
236 start = revfix(repo, start, 0)
240 start = revfix(repo, start, 0)
237 end = revfix(repo, end, repo.changelog.count() - 1)
241 end = revfix(repo, end, repo.changelog.count() - 1)
238 step = start > end and -1 or 1
242 step = start > end and -1 or 1
239 for rev in xrange(start, end+step, step):
243 for rev in xrange(start, end+step, step):
240 if rev in seen:
244 if rev in seen:
241 continue
245 continue
242 seen[rev] = 1
246 seen[rev] = 1
243 yield str(rev)
247 yield str(rev)
244 else:
248 else:
245 rev = revfix(repo, spec, None)
249 rev = revfix(repo, spec, None)
246 if rev in seen:
250 if rev in seen:
247 continue
251 continue
248 seen[rev] = 1
252 seen[rev] = 1
249 yield str(rev)
253 yield str(rev)
250
254
251 def make_filename(repo, pat, node,
255 def make_filename(repo, pat, node,
252 total=None, seqno=None, revwidth=None, pathname=None):
256 total=None, seqno=None, revwidth=None, pathname=None):
253 node_expander = {
257 node_expander = {
254 'H': lambda: hex(node),
258 'H': lambda: hex(node),
255 'R': lambda: str(repo.changelog.rev(node)),
259 'R': lambda: str(repo.changelog.rev(node)),
256 'h': lambda: short(node),
260 'h': lambda: short(node),
257 }
261 }
258 expander = {
262 expander = {
259 '%': lambda: '%',
263 '%': lambda: '%',
260 'b': lambda: os.path.basename(repo.root),
264 'b': lambda: os.path.basename(repo.root),
261 }
265 }
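# Expansions applied to "pat" below (summarizing the tables above):
#   %%  a literal "%"               %b  basename of the repository root
#   %H  full hex changeset hash     %h  short hex changeset hash
#   %R  changeset revision number   %r  revision, zero-padded to revwidth
#   %N  the "total" value           %n  "seqno", padded to width of "total"
#   %s  basename of "pathname"      %d  dirname of "pathname", or "."
#   %p  "pathname" unchanged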
262
266
263 try:
267 try:
264 if node:
268 if node:
265 expander.update(node_expander)
269 expander.update(node_expander)
266 if node and revwidth is not None:
270 if node and revwidth is not None:
267 expander['r'] = lambda: str(r.rev(node)).zfill(revwidth)
271 expander['r'] = lambda: str(r.rev(node)).zfill(revwidth)
268 if total is not None:
272 if total is not None:
269 expander['N'] = lambda: str(total)
273 expander['N'] = lambda: str(total)
270 if seqno is not None:
274 if seqno is not None:
271 expander['n'] = lambda: str(seqno)
275 expander['n'] = lambda: str(seqno)
272 if total is not None and seqno is not None:
276 if total is not None and seqno is not None:
273 expander['n'] = lambda:str(seqno).zfill(len(str(total)))
277 expander['n'] = lambda:str(seqno).zfill(len(str(total)))
274 if pathname is not None:
278 if pathname is not None:
275 expander['s'] = lambda: os.path.basename(pathname)
279 expander['s'] = lambda: os.path.basename(pathname)
276 expander['d'] = lambda: os.path.dirname(pathname) or '.'
280 expander['d'] = lambda: os.path.dirname(pathname) or '.'
277 expander['p'] = lambda: pathname
281 expander['p'] = lambda: pathname
278
282
279 newname = []
283 newname = []
280 patlen = len(pat)
284 patlen = len(pat)
281 i = 0
285 i = 0
282 while i < patlen:
286 while i < patlen:
283 c = pat[i]
287 c = pat[i]
284 if c == '%':
288 if c == '%':
285 i += 1
289 i += 1
286 c = pat[i]
290 c = pat[i]
287 c = expander[c]()
291 c = expander[c]()
288 newname.append(c)
292 newname.append(c)
289 i += 1
293 i += 1
290 return ''.join(newname)
294 return ''.join(newname)
291 except KeyError, inst:
295 except KeyError, inst:
292 raise util.Abort(_("invalid format spec '%%%s' in output file name"),
296 raise util.Abort(_("invalid format spec '%%%s' in output file name"),
293 inst.args[0])
297 inst.args[0])
294
298
295 def make_file(repo, pat, node=None,
299 def make_file(repo, pat, node=None,
296 total=None, seqno=None, revwidth=None, mode='wb', pathname=None):
300 total=None, seqno=None, revwidth=None, mode='wb', pathname=None):
297 if not pat or pat == '-':
301 if not pat or pat == '-':
298 return 'w' in mode and sys.stdout or sys.stdin
302 return 'w' in mode and sys.stdout or sys.stdin
299 if hasattr(pat, 'write') and 'w' in mode:
303 if hasattr(pat, 'write') and 'w' in mode:
300 return pat
304 return pat
301 if hasattr(pat, 'read') and 'r' in mode:
305 if hasattr(pat, 'read') and 'r' in mode:
302 return pat
306 return pat
303 return open(make_filename(repo, pat, node, total, seqno, revwidth,
307 return open(make_filename(repo, pat, node, total, seqno, revwidth,
304 pathname),
308 pathname),
305 mode)
309 mode)
306
310
def write_bundle(cg, filename=None, compress=True):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """
    class nocompress(object):
        def compress(self, x):
            return x
        def flush(self):
            return ""

    fh = None
    cleanup = None
    try:
        if filename:
            if os.path.exists(filename):
                raise util.Abort(_("file '%s' already exists"), filename)
            fh = open(filename, "wb")
        else:
            fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
            fh = os.fdopen(fd, "wb")
        cleanup = filename

        if compress:
            fh.write("HG10")
            z = bz2.BZ2Compressor(9)
        else:
            fh.write("HG10UN")
            z = nocompress()
        # parse the changegroup data, otherwise we will block
        # in case of sshrepo because we don't know the end of the stream

        # an empty chunkiter is the end of the changegroup
        empty = False
        while not empty:
            empty = True
            for chunk in changegroup.chunkiter(cg):
                empty = False
                fh.write(z.compress(changegroup.genchunk(chunk)))
        fh.write(z.compress(changegroup.closechunk()))
        fh.write(z.flush())
        cleanup = None
        return filename
    finally:
        if fh is not None:
            fh.close()
        if cleanup is not None:
            os.unlink(cleanup)

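
# A usage sketch (hypothetical caller, not part of the original module):
# dumping the changesets missing from another repository into a file.
# As the code above shows, the file starts with the magic "HG10" (a bz2
# compressed changegroup follows) or "HG10UN" (uncompressed), then the
# length-prefixed changegroup chunks and a terminating empty chunk.
#
#   o = repo.findoutgoing(other)
#   cg = repo.changegroup(o, 'bundle')
#   fname = write_bundle(cg, 'changes.hg')      # aborts if the file exists
#   fname = write_bundle(cg, compress=False)    # temp file, "HG10UN" header
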
def dodiff(fp, ui, repo, node1, node2, files=None, match=util.always,
           changes=None, text=False, opts={}):
    if not node1:
        node1 = repo.dirstate.parents()[0]
    # reading the data for node1 early allows it to play nicely
    # with repo.changes and the revlog cache.
    change = repo.changelog.read(node1)
    mmap = repo.manifest.read(change[0])
    date1 = util.datestr(change[2])

    if not changes:
        changes = repo.changes(node1, node2, files, match=match)
    modified, added, removed, deleted, unknown = changes
    if files:
        modified, added, removed = map(lambda x: filterfiles(files, x),
                                       (modified, added, removed))

    if not modified and not added and not removed:
        return

    if node2:
        change = repo.changelog.read(node2)
        mmap2 = repo.manifest.read(change[0])
        _date2 = util.datestr(change[2])
        def date2(f):
            return _date2
        def read(f):
            return repo.file(f).read(mmap2[f])
    else:
        tz = util.makedate()[1]
        _date2 = util.datestr()
        def date2(f):
            try:
                return util.datestr((os.lstat(repo.wjoin(f)).st_mtime, tz))
            except OSError, err:
                if err.errno != errno.ENOENT: raise
                return _date2
        def read(f):
            return repo.wread(f)

    if ui.quiet:
        r = None
    else:
        hexfunc = ui.verbose and hex or short
        r = [hexfunc(node) for node in [node1, node2] if node]

    diffopts = ui.diffopts()
    showfunc = opts.get('show_function') or diffopts['showfunc']
    ignorews = opts.get('ignore_all_space') or diffopts['ignorews']
    ignorewsamount = opts.get('ignore_space_change') or \
                     diffopts['ignorewsamount']
    ignoreblanklines = opts.get('ignore_blank_lines') or \
                       diffopts['ignoreblanklines']
    for f in modified:
        to = None
        if f in mmap:
            to = repo.file(f).read(mmap[f])
        tn = read(f)
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))
    for f in added:
        to = None
        tn = read(f)
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))
    for f in removed:
        to = repo.file(f).read(mmap[f])
        tn = None
        fp.write(mdiff.unidiff(to, date1, tn, date2(f), f, r, text=text,
                               showfunc=showfunc, ignorews=ignorews,
                               ignorewsamount=ignorewsamount,
                               ignoreblanklines=ignoreblanklines))

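
# Minimal usage sketch (assumed arguments): write a unified diff between
# the working directory and its first parent to stdout, honouring the
# diff settings picked up from ui.diffopts() above.
#
#   dodiff(sys.stdout, ui, repo, None, None)
#
# Passing node2=None makes the right-hand side the working directory, in
# which case file contents come from repo.wread() and file dates from
# os.lstat(), as shown in read() and date2() above.
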
def trimuser(ui, name, rev, revcache):
    """trim the name of the user who committed a change"""
    user = revcache.get(rev)
    if user is None:
        user = revcache[rev] = ui.shortuser(name)
    return user

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

    def show(self, rev=0, changenode=None, brinfo=None):
        '''show a single changeset or file revision'''
        log = self.repo.changelog
        if changenode is None:
            changenode = log.node(rev)
        elif not rev:
            rev = log.rev(changenode)

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % (rev, short(changenode)))
            return

        changes = log.read(changenode)
        date = util.datestr(changes[2])

        parents = [(log.rev(p), self.ui.verbose and hex(p) or short(p))
                   for p in log.parents(changenode)
                   if self.ui.debugflag or p != nullid]
        if (not self.ui.debugflag and len(parents) == 1 and
            parents[0][0] == rev-1):
            parents = []

        if self.ui.verbose:
            self.ui.write(_("changeset: %d:%s\n") % (rev, hex(changenode)))
        else:
            self.ui.write(_("changeset: %d:%s\n") % (rev, short(changenode)))

        for tag in self.repo.nodetags(changenode):
            self.ui.status(_("tag: %s\n") % tag)
        for parent in parents:
            self.ui.write(_("parent: %d:%s\n") % parent)

        if brinfo and changenode in brinfo:
            br = brinfo[changenode]
            self.ui.write(_("branch: %s\n") % " ".join(br))

        self.ui.debug(_("manifest: %d:%s\n") %
                      (self.repo.manifest.rev(changes[0]), hex(changes[0])))
        self.ui.status(_("user: %s\n") % changes[1])
        self.ui.status(_("date: %s\n") % date)

        if self.ui.debugflag:
            files = self.repo.changes(log.parents(changenode)[0], changenode)
            for key, value in zip([_("files:"), _("files+:"), _("files-:")],
                                  files):
                if value:
                    self.ui.note("%-12s %s\n" % (key, " ".join(value)))
        else:
            self.ui.note(_("files: %s\n") % " ".join(changes[3]))

        description = changes[4].strip()
        if description:
            if self.ui.verbose:
                self.ui.status(_("description:\n"))
                self.ui.status(description)
                self.ui.status("\n\n")
            else:
                self.ui.status(_("summary: %s\n") %
                               description.splitlines()[0])
        self.ui.status("\n")

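
# Example of the plain output format produced by show() above when neither
# --quiet nor --verbose is in effect (values are placeholders, labels taken
# from the writes above):
#
#   changeset: 5:abcdef012345
#   tag: tip
#   parent: 3:0123456789ab
#   user: A. Hacker <hacker@example.com>
#   date: Mon Jun 26 12:00:00 2006 +0000
#   summary: first line of the commit message
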
def show_changeset(ui, repo, opts):
    '''show one changeset. uses template or regular display. caller
    can pass in 'style' and 'template' options in opts.'''

    tmpl = opts.get('template')
    if tmpl:
        tmpl = templater.parsestring(tmpl, quoted=False)
    else:
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl: tmpl = templater.parsestring(tmpl)
    mapfile = opts.get('style') or ui.config('ui', 'style')
    if tmpl or mapfile:
        if mapfile:
            if not os.path.isfile(mapfile):
                mapname = templater.templatepath('map-cmdline.' + mapfile)
                if not mapname: mapname = templater.templatepath(mapfile)
                if mapname: mapfile = mapname
        try:
            t = templater.changeset_templater(ui, repo, mapfile)
        except SyntaxError, inst:
            raise util.Abort(inst.args[0])
        if tmpl: t.use_template(tmpl)
        return t
    return changeset_printer(ui, repo)

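
# Configuration sketch: the resolution above consults the command-line
# options first and then the [ui] section of hgrc; hypothetical settings
# such as
#
#   [ui]
#   logtemplate = {rev}:{node|short} {desc|firstline}\n
#   style = default
#
# would make show_changeset() return a changeset_templater instead of the
# plain changeset_printer (note that a bare style name is looked up as
# 'map-cmdline.<name>' by the code above).
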
def show_version(ui):
    """output version and copyright information"""
    ui.write(_("Mercurial Distributed SCM (version %s)\n")
             % version.get_version())
    ui.status(_(
        "\nCopyright (C) 2005 Matt Mackall <mpm@selenic.com>\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    ))

def help_(ui, name=None, with_version=False):
    """show help for a command, extension, or list of commands

    With no arguments, print a list of commands and short help.

    Given a command name, print help for that command.

    Given an extension name, print help for that extension, and the
    commands it provides."""
    option_lists = []

    def helpcmd(name):
        if with_version:
            show_version(ui)
            ui.write('\n')
        aliases, i = findcmd(name)
        # synopsis
        ui.write("%s\n\n" % i[2])

        # description
        doc = i[0].__doc__
        if not doc:
            doc = _("(No help text available)")
        if ui.quiet:
            doc = doc.splitlines(0)[0]
        ui.write("%s\n" % doc.rstrip())

        if not ui.quiet:
            # aliases
            if len(aliases) > 1:
                ui.write(_("\naliases: %s\n") % ', '.join(aliases[1:]))

            # options
            if i[1]:
                option_lists.append(("options", i[1]))

    def helplist(select=None):
        h = {}
        cmds = {}
        for c, e in table.items():
            f = c.split("|", 1)[0]
            if select and not select(f):
                continue
            if name == "shortlist" and not f.startswith("^"):
                continue
            f = f.lstrip("^")
            if not ui.debugflag and f.startswith("debug"):
                continue
            doc = e[0].__doc__
            if not doc:
                doc = _("(No help text available)")
            h[f] = doc.splitlines(0)[0].rstrip()
            cmds[f] = c.lstrip("^")

        fns = h.keys()
        fns.sort()
        m = max(map(len, fns))
        for f in fns:
            if ui.verbose:
                commands = cmds[f].replace("|",", ")
                ui.write(" %s:\n %s\n"%(commands, h[f]))
            else:
                ui.write(' %-*s %s\n' % (m, f, h[f]))

    def helpext(name):
        try:
            mod = findext(name)
        except KeyError:
            raise UnknownCommand(name)

        doc = (mod.__doc__ or _('No help text available')).splitlines(0)
        ui.write(_('%s extension - %s\n') % (name.split('.')[-1], doc[0]))
        for d in doc[1:]:
            ui.write(d, '\n')

        ui.status('\n')
        if ui.verbose:
            ui.status(_('list of commands:\n\n'))
        else:
            ui.status(_('list of commands (use "hg help -v %s" '
                        'to show aliases and global options):\n\n') % name)

        modcmds = dict.fromkeys([c.split('|', 1)[0] for c in mod.cmdtable])
        helplist(modcmds.has_key)

    if name and name != 'shortlist':
        try:
            helpcmd(name)
        except UnknownCommand:
            helpext(name)

    else:
        # program name
        if ui.verbose or with_version:
            show_version(ui)
        else:
            ui.status(_("Mercurial Distributed SCM\n"))
        ui.status('\n')

        # list of commands
        if name == "shortlist":
            ui.status(_('basic commands (use "hg help" '
                        'for the full list or option "-v" for details):\n\n'))
        elif ui.verbose:
            ui.status(_('list of commands:\n\n'))
        else:
            ui.status(_('list of commands (use "hg help -v" '
                        'to show aliases and global options):\n\n'))

        helplist()

    # global options
    if ui.verbose:
        option_lists.append(("global options", globalopts))

    # list all option lists
    opt_output = []
    for title, options in option_lists:
        opt_output.append(("\n%s:\n" % title, None))
        for shortopt, longopt, default, desc in options:
            opt_output.append(("%2s%s" % (shortopt and "-%s" % shortopt,
                                          longopt and " --%s" % longopt),
                               "%s%s" % (desc,
                                         default
                                         and _(" (default: %s)") % default
                                         or "")))

    if opt_output:
        opts_len = max([len(line[0]) for line in opt_output if line[1]])
        for first, second in opt_output:
            if second:
                ui.write(" %-*s %s\n" % (opts_len, first, second))
            else:
                ui.write("%s\n" % first)

# Commands start here, listed alphabetically

def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the repository.

    The files will be added to the repository at the next commit.

    If no names are given, add all files in the repository.
    """

    names = []
    for src, abs, rel, exact in walk(repo, pats, opts):
        if exact:
            if ui.verbose:
                ui.status(_('adding %s\n') % rel)
            names.append(abs)
        elif repo.dirstate.state(abs) == '?':
            ui.status(_('adding %s\n') % rel)
            names.append(abs)
    if not opts.get('dry_run'):
        repo.add(names)

def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files (DEPRECATED)

    (DEPRECATED)
    Add all new files and remove all missing files from the repository.

    New files are ignored if they match any of the patterns in .hgignore. As
    with add, these changes take effect at the next commit.

    This command is now deprecated and will be removed in a future
    release. Please use add and remove --after instead.
    """
    ui.warn(_('(the addremove command is deprecated; use add and remove '
              '--after instead)\n'))
    return addremove_lock(ui, repo, pats, opts)

def addremove_lock(ui, repo, pats, opts, wlock=None):
    add, remove = [], []
    for src, abs, rel, exact in walk(repo, pats, opts):
        if src == 'f' and repo.dirstate.state(abs) == '?':
            add.append(abs)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % ((pats and rel) or abs))
        if repo.dirstate.state(abs) != 'r' and not os.path.exists(rel):
            remove.append(abs)
            if ui.verbose or not exact:
                ui.status(_('removing %s\n') % ((pats and rel) or abs))
    if not opts.get('dry_run'):
        repo.add(add, wlock=wlock)
        repo.remove(remove, wlock=wlock)

def annotate(ui, repo, *pats, **opts):
    """show changeset information per file line

    List changes in files, showing the revision id responsible for each line

    This command is useful to discover who did a change or when a change took
    place.

    Without the -a option, annotate will avoid processing files it
    detects as binary. With -a, annotate will generate an annotation
    anyway, probably with undesirable results.
    """
    def getnode(rev):
        return short(repo.changelog.node(rev))

    ucache = {}
    def getname(rev):
        try:
            return ucache[rev]
        except:
            u = trimuser(ui, repo.changectx(rev).user(), rev, ucache)
            ucache[rev] = u
            return u

    dcache = {}
    def getdate(rev):
        datestr = dcache.get(rev)
        if datestr is None:
            datestr = dcache[rev] = util.datestr(repo.changectx(rev).date())
        return datestr

    if not pats:
        raise util.Abort(_('at least one file name or pattern required'))

    opmap = [['user', getname], ['number', str], ['changeset', getnode],
             ['date', getdate]]
    if not opts['user'] and not opts['changeset'] and not opts['date']:
        opts['number'] = 1

    ctx = repo.changectx(opts['rev'] or repo.dirstate.parents()[0])

    for src, abs, rel, exact in walk(repo, pats, opts, node=ctx.node()):
        fctx = ctx.filectx(abs)
        if not opts['text'] and util.binary(fctx.data()):
            ui.write(_("%s: binary file\n") % ((pats and rel) or abs))
            continue

        lines = fctx.annotate()
        pieces = []

        for o, f in opmap:
            if opts[o]:
                l = [f(n) for n, dummy in lines]
                if l:
                    m = max(map(len, l))
                    pieces.append(["%*s" % (m, x) for x in l])

        if pieces:
            for p, l in zip(zip(*pieces), lines):
                ui.write("%s: %s" % (" ".join(p), l[1]))

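
# Command-line sketch for the annotate logic above (flags map to the
# 'user', 'number', 'changeset' and 'date' keys in opmap; the file name is
# a placeholder):
#
#   hg annotate -u -n src/commands.py
#
# prints one left-hand column per selected flag, right-aligned to the
# widest value as computed with max(map(len, l)) above; with no flags,
# opts['number'] is forced on and only revision numbers are shown.
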
def archive(ui, repo, dest, **opts):
    '''create unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use "-r" to specify a different revision.

    To specify the type of archive to create, use "-t". Valid
    types are:

    "files" (default): a directory full of files
    "tar": tar archive, uncompressed
    "tbz2": tar archive, compressed using bzip2
    "tgz": tar archive, compressed using gzip
    "uzip": zip archive, uncompressed
    "zip": zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see "hg help export" for details.

    Each member added to an archive file has a directory prefix
    prepended. Use "-p" to specify a format string for the prefix.
    The default is the basename of the archive, with suffixes removed.
    '''

    if opts['rev']:
        node = repo.lookup(opts['rev'])
    else:
        node, p2 = repo.dirstate.parents()
        if p2 != nullid:
            raise util.Abort(_('uncommitted merge - please provide a '
                               'specific revision'))

    dest = make_filename(repo, dest, node)
    if os.path.realpath(dest) == repo.root:
        raise util.Abort(_('repository root cannot be destination'))
    dummy, matchfn, dummy = matchpats(repo, [], opts)
    kind = opts.get('type') or 'files'
    prefix = opts['prefix']
    if dest == '-':
        if kind == 'files':
            raise util.Abort(_('cannot archive plain files to stdout'))
        dest = sys.stdout
        if not prefix: prefix = os.path.basename(repo.root) + '-%h'
    prefix = make_filename(repo, prefix, node)
    archival.archive(repo, dest, node, kind, not opts['no_decode'],
                     matchfn, prefix)

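
# Command-line sketch (hypothetical revision/tag and path names): the
# destination and the -p prefix both go through make_filename(), so the
# %h/%R/%b specifiers documented there work here too.
#
#   hg archive -t tgz -r 1.0 ../myproject-%h.tar.gz
#   hg archive -t zip -p 'myproject-%R' -r 1.0 ../myproject.zip
#   hg archive -t tar -r tip -            # '-' streams the tar to stdout
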
def backout(ui, repo, rev, **opts):
    '''reverse effect of earlier changeset

    Commit the backed out changes as a new changeset. The new
    changeset is a child of the backed out changeset.

    If you back out a changeset other than the tip, a new head is
    created. This head is the parent of the working directory. If
    you back out an old changeset, your working directory will appear
    old after the backout. You should merge the backout changeset
    with another head.

    The --merge option remembers the parent of the working directory
    before starting the backout, then merges the new head with that
    changeset afterwards. This saves you from doing the merge by
    hand. The result of this merge is not committed, as for a normal
    merge.'''

    bail_if_changed(repo)
    op1, op2 = repo.dirstate.parents()
    if op2 != nullid:
        raise util.Abort(_('outstanding uncommitted merge'))
    node = repo.lookup(rev)
    p1, p2 = repo.changelog.parents(node)
    if p1 == nullid:
        raise util.Abort(_('cannot back out a change with no parents'))
    if p2 != nullid:
        if not opts['parent']:
            raise util.Abort(_('cannot back out a merge changeset without '
                               '--parent'))
        p = repo.lookup(opts['parent'])
        if p not in (p1, p2):
            raise util.Abort(_('%s is not a parent of %s' %
                               (short(p), short(node))))
        parent = p
    else:
        if opts['parent']:
            raise util.Abort(_('cannot use --parent on non-merge changeset'))
        parent = p1
    repo.update(node, force=True, show_stats=False)
    revert_opts = opts.copy()
    revert_opts['rev'] = hex(parent)
    revert(ui, repo, **revert_opts)
    commit_opts = opts.copy()
    commit_opts['addremove'] = False
    if not commit_opts['message'] and not commit_opts['logfile']:
        commit_opts['message'] = _("Backed out changeset %s") % (hex(node))
        commit_opts['force_editor'] = True
    commit(ui, repo, **commit_opts)
    def nice(node):
        return '%d:%s' % (repo.changelog.rev(node), short(node))
    ui.status(_('changeset %s backs out changeset %s\n') %
              (nice(repo.changelog.tip()), nice(node)))
    if op1 != node:
        if opts['merge']:
            ui.status(_('merging with changeset %s\n') % nice(op1))
            doupdate(ui, repo, hex(op1), **opts)
        else:
            ui.status(_('the backout changeset is a new head - '
                        'do not forget to merge\n'))
            ui.status(_('(use "backout -m" if you want to auto-merge)\n'))

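
# Command-line sketch: backing out an old changeset and letting Mercurial
# do the follow-up merge described in the docstring (the revision number
# is a placeholder):
#
#   hg backout --merge 1500
#
# Without --merge the backout changeset is left as a separate head, and
# the status messages above remind you to merge it by hand.
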
def bundle(ui, repo, fname, dest=None, **opts):
    """create a changegroup file

    Generate a compressed changegroup file collecting all changesets
    not found in the other repository.

    This file can then be transferred using conventional means and
    applied to another repository with the unbundle command. This is
    useful when native push and pull are not available or when
    exporting an entire repository is undesirable. The standard file
    extension is ".hg".

    Unlike import/export, this exactly preserves all changeset
    contents including permissions, rename data, and revision history.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    other = hg.repository(ui, dest)
    o = repo.findoutgoing(other, force=opts['force'])
    cg = repo.changegroup(o, 'bundle')
    write_bundle(cg, fname)

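
# Command-line sketch of the round trip this enables (URL and file name
# are placeholders):
#
#   hg bundle changes.hg http://hg.example.com/main    # on this side
#   hg unbundle changes.hg                             # on the other side
#
# The bundle contains exactly the changesets repo.findoutgoing() reports
# as missing from the destination, written via write_bundle() above.
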
def cat(ui, repo, file1, *pats, **opts):
    """output the latest or given revisions of files

    Print the specified files as they were at the given revision.
    If no revision is given then the tip is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are the same as
    for the export command, with the following additions:

    %s basename of file being printed
    %d dirname of file being printed, or '.' if in repo root
    %p root-relative path name of file being printed
    """
    ctx = repo.changectx(opts['rev'] or -1)
    for src, abs, rel, exact in walk(repo, (file1,) + pats, opts, ctx.node()):
        fp = make_file(repo, opts['output'], ctx.node(), pathname=abs)
        fp.write(ctx.filectx(abs).data())

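
# Command-line sketch for the -o format string described above (file and
# revision are placeholders):
#
#   hg cat -r 1000 -o '%s.%R' src/commands.py
#
# writes the revision-1000 contents of src/commands.py to
# 'commands.py.1000': %s is the basename and %R the changeset revision
# number, both expanded by make_filename() via make_file().
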
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    .hg/hgrc file, as the default to be used for future pulls.

    For efficiency, hardlinks are used for cloning whenever the source
    and destination are on the same filesystem. Some filesystems,
    such as AFS, implement hardlinking incorrectly, but do not report
    errors. In these cases, use the --pull option to avoid
    hardlinking.

    See pull for valid source format details.

    It is possible to specify an ssh:// URL as the destination, but no
    .hg/hgrc will be created on the remote side. Look at the help text
    for the pull command for important details about ssh:// URLs.
    """
    ui.setconfig_remoteopts(**opts)
    hg.clone(ui, ui.expandpath(source), dest,
             pull=opts['pull'],
             stream=opts['uncompressed'],
             rev=opts['rev'],
             update=not opts['noupdate'])

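
# Command-line sketch (URLs and paths are placeholders) of the options
# consumed above:
#
#   hg clone --pull http://hg.example.com/proj proj-copy
#   hg clone --uncompressed /shared/repos/proj
#
# --pull forces a pull-based clone (no hardlinks), and --uncompressed is
# forwarded as the 'stream' argument to hg.clone().
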
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository.

    If a list of files is omitted, all changes reported by "hg status"
    will be committed.

    If no commit message is specified, the editor configured in your hgrc
    or in the EDITOR environment variable is started to enter a message.
    """
    message = opts['message']
    logfile = opts['logfile']

    if message and logfile:
        raise util.Abort(_('options --message and --logfile are mutually '
                           'exclusive'))
    if not message and logfile:
        try:
            if logfile == '-':
                message = sys.stdin.read()
            else:
                message = open(logfile).read()
        except IOError, inst:
            raise util.Abort(_("can't read commit message '%s': %s") %
                             (logfile, inst.strerror))

    if opts['addremove']:
        addremove_lock(ui, repo, pats, opts)
    fns, match, anypats = matchpats(repo, pats, opts)
    if pats:
        modified, added, removed, deleted, unknown = (
            repo.changes(files=fns, match=match))
        files = modified + added + removed
    else:
        files = []
    try:
        repo.commit(files, message, opts['user'], opts['date'], match,
                    force_editor=opts.get('force_editor'))
    except ValueError, inst:
        raise util.Abort(str(inst))

def docopy(ui, repo, pats, opts, wlock):
    # called with the repo lock held
    cwd = repo.getcwd()
    errors = 0
    copied = []
    targets = {}

    def okaytocopy(abs, rel, exact):
        reasons = {'?': _('is not managed'),
                   'a': _('has been marked for add'),
                   'r': _('has been marked for remove')}
        state = repo.dirstate.state(abs)
        reason = reasons.get(state)
        if reason:
            if state == 'a':
                origsrc = repo.dirstate.copied(abs)
                if origsrc is not None:
                    return origsrc
            if exact:
                ui.warn(_('%s: not copying - file %s\n') % (rel, reason))
        else:
            return abs

    def copy(origsrc, abssrc, relsrc, target, exact):
        abstarget = util.canonpath(repo.root, cwd, target)
        reltarget = util.pathto(cwd, abstarget)
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, abssrc, prevsrc))
            return
        if (not opts['after'] and os.path.exists(reltarget) or
            opts['after'] and repo.dirstate.state(abstarget) not in '?r'):
            if not opts['force']:
                ui.warn(_('%s: not overwriting - file exists\n') %
                        reltarget)
                return
            if not opts['after'] and not opts.get('dry_run'):
                os.unlink(reltarget)
        if opts['after']:
            if not os.path.exists(reltarget):
                return
        else:
            targetdir = os.path.dirname(reltarget) or '.'
            if not os.path.isdir(targetdir) and not opts.get('dry_run'):
                os.makedirs(targetdir)
            try:
                restore = repo.dirstate.state(abstarget) == 'r'
                if restore and not opts.get('dry_run'):
                    repo.undelete([abstarget], wlock)
                try:
                    if not opts.get('dry_run'):
                        shutil.copyfile(relsrc, reltarget)
                        shutil.copymode(relsrc, reltarget)
                    restore = False
                finally:
                    if restore:
                        repo.remove([abstarget], wlock)
            except shutil.Error, inst:
                raise util.Abort(str(inst))
            except IOError, inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working copy\n') % relsrc)
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, inst.strerror))
                errors += 1
                return
        if ui.verbose or not exact:
            ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
        targets[abstarget] = abssrc
        if abstarget != origsrc and not opts.get('dry_run'):
            repo.copy(origsrc, abstarget, wlock)
        copied.append((abssrc, relsrc, exact))

    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = util.canonpath(repo.root, cwd, pat)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(os.sep)
            res = lambda p: os.path.join(dest, p[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest, os.path.basename(p))
        else:
            res = lambda p: dest
        return res

    def targetpathafterfn(pat, dest, srcs):
        if util.patkind(pat, None)[0]:
            # a mercurial pattern
            res = lambda p: os.path.join(dest, os.path.basename(p))
        else:
            abspfx = util.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, s[0][striplen:])
                        if os.path.exists(t):
                            score += 1
                    return score

                striplen = len(abspfx)
                if striplen:
                    striplen += len(os.sep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(os.sep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest, p[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest, os.path.basename(p))
                else:
                    res = lambda p: dest
        return res


    pats = list(pats)
    if not pats:
        raise util.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise util.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest)
    if (len(pats) > 1 or util.patkind(pats[0], None)[0]) and not destdirexists:
        raise util.Abort(_('with multiple sources, destination must be an '
                           'existing directory'))
    if opts['after']:
        tfn = targetpathafterfn
    else:
        tfn = targetpathfn
    copylist = []
    for pat in pats:
        srcs = []
        for tag, abssrc, relsrc, exact in walk(repo, [pat], opts):
            origsrc = okaytocopy(abssrc, relsrc, exact)
            if origsrc:
                srcs.append((origsrc, abssrc, relsrc, exact))
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise util.Abort(_('no files to copy'))

    for targetpath, srcs in copylist:
        for origsrc, abssrc, relsrc, exact in srcs:
            copy(origsrc, abssrc, relsrc, targetpath(abssrc), exact)

    if errors:
        ui.warn(_('(consider using --after)\n'))
    return errors, copied

def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    there can only be one source.

    By default, this command copies the contents of files as they
    stand in the working directory. If invoked with --after, the
    operation is recorded, but no copying is performed.

    This command takes effect in the next commit.

    NOTE: This command should be treated as experimental. While it
    should properly record copied files, this information is not yet
    fully used by merge, nor fully reported by log.
    """
    wlock = repo.wlock(0)
    errs, copied = docopy(ui, repo, pats, opts, wlock)
    return errs

def debugancestor(ui, index, rev1, rev2):
    """find the ancestor revision of two revisions in a given index"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False), index, "", 0)
    a = r.ancestor(r.lookup(rev1), r.lookup(rev2))
    ui.write("%d:%s\n" % (r.rev(a), hex(a)))

def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts['options']:
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = findcmd(cmd)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    clist = findpossible(cmd).keys()
    clist.sort()
    ui.write("%s\n" % "\n".join(clist))

def debugrebuildstate(ui, repo, rev=None):
    """rebuild the dirstate as it would look like for the given revision"""
    if not rev:
        rev = repo.changelog.tip()
    else:
        rev = repo.lookup(rev)
    change = repo.changelog.read(rev)
    n = change[0]
    files = repo.manifest.readflags(n)
    wlock = repo.wlock()
    repo.dirstate.rebuild(rev, files.iteritems())

def debugcheckstate(ui, repo):
    """validate the correctness of the current dirstate"""
    parent1, parent2 = repo.dirstate.parents()
    repo.dirstate.read()
    dc = repo.dirstate.map
    keys = dc.keys()
    keys.sort()
    m1n = repo.changelog.read(parent1)[0]
    m2n = repo.changelog.read(parent2)[0]
    m1 = repo.manifest.read(m1n)
    m2 = repo.manifest.read(m2n)
    errors = 0
    for f in dc:
        state = repo.dirstate.state(f)
        if state in "nr" and f not in m1:
            ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
            errors += 1
        if state in "a" and f in m1:
            ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
            errors += 1
        if state in "m" and f not in m1 and f not in m2:
            ui.warn(_("%s in state %s, but not in either manifest\n") %
                    (f, state))
            errors += 1
    for f in m1:
        state = repo.dirstate.state(f)
        if state not in "nrm":
            ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
            errors += 1
    if errors:
        error = _(".hg/dirstate inconsistent with current parent's manifest")
        raise util.Abort(error)

def debugconfig(ui, repo, *values):
    """show combined config settings from all hgrc files

    With no args, print names and values of all config items.

    With one arg of the form section.name, print just the value of
    that config item.

    With multiple args, print names and values of all config items
    with matching section names."""

    if values:
        if len([v for v in values if '.' in v]) > 1:
            raise util.Abort(_('only one config item permitted'))
    for section, name, value in ui.walkconfig():
        sectname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    ui.write('%s=%s\n' % (sectname, value))
                elif v == sectname:
                    ui.write(value, '\n')
        else:
            ui.write('%s=%s\n' % (sectname, value))

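# A standalone sketch (hypothetical helper, not part of commands.py) of the
# matching rule used in debugconfig above: an argument selects either a whole
# section ("ui") or a single item ("ui.username").
def _example_match_config(items, arg):
    # items: (section, name, value) tuples, e.g. as yielded by ui.walkconfig()
    out = []
    for section, name, value in items:
        sectname = section + '.' + name
        if arg == section:
            out.append('%s=%s' % (sectname, value))   # whole section
        elif arg == sectname:
            out.append(value)                         # single item, value only
    return out
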
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory

    This is useful for writing repository conversion tools, but should
    be used with care.
    """

    if not rev2:
        rev2 = hex(nullid)

    repo.dirstate.setparents(repo.lookup(rev1), repo.lookup(rev2))

def debugstate(ui, repo):
    """show the contents of the current dirstate"""
    repo.dirstate.read()
    dc = repo.dirstate.map
    keys = dc.keys()
    keys.sort()
    for file_ in keys:
        ui.write("%c %3o %10d %s %s\n"
                 % (dc[file_][0], dc[file_][1] & 0777, dc[file_][2],
                    time.strftime("%x %X",
                                  time.localtime(dc[file_][3])), file_))
    for f in repo.dirstate.copies:
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copies[f], f))

def debugdata(ui, file_, rev):
    """dump the contents of a data file revision"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False),
                      file_[:-2] + ".i", file_, 0)
    try:
        ui.write(r.revision(r.lookup(rev)))
    except KeyError:
        raise util.Abort(_('invalid revision identifier %s'), rev)

def debugindex(ui, file_):
    """dump the contents of an index file"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
    ui.write(" rev offset length base linkrev" +
             " nodeid p1 p2\n")
    for i in range(r.count()):
        node = r.node(i)
        pp = r.parents(node)
        ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
                i, r.start(i), r.length(i), r.base(i), r.linkrev(node),
                short(node), short(pp[0]), short(pp[1])))

def debugindexdot(ui, file_):
    """dump an index DAG as a .dot file"""
    r = revlog.revlog(util.opener(os.getcwd(), audit=False), file_, "", 0)
    ui.write("digraph G {\n")
    for i in range(r.count()):
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

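# A standalone sketch of the same idea on plain data (hypothetical helper,
# not part of commands.py): given (rev, parent_revs) pairs, emit one
# "parent -> child" edge per parent, which is what the revlog loop above does.
def _example_dag_to_dot(entries):
    # entries: e.g. [(1, [0]), (2, [1]), (3, [1, 2])]
    lines = ["digraph G {"]
    for rev, parents in entries:
        for p in parents:
            lines.append("\t%d -> %d" % (p, rev))
    lines.append("}")
    return "\n".join(lines)
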
def debugrename(ui, repo, file, rev=None):
    """dump rename information"""
    r = repo.file(relpath(repo, [file])[0])
    if rev:
        try:
            # assume all revision numbers are for changesets
            n = repo.lookup(rev)
            change = repo.changelog.read(n)
            m = repo.manifest.read(change[0])
            n = m[relpath(repo, [file])[0]]
        except (hg.RepoError, KeyError):
            n = r.lookup(rev)
    else:
        n = r.tip()
    m = r.renamed(n)
    if m:
        ui.write(_("renamed from %s:%s\n") % (m[0], hex(m[1])))
    else:
        ui.write(_("not renamed\n"))

def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    items = list(walk(repo, pats, opts))
    if not items:
        return
    fmt = '%%s %%-%ds %%-%ds %%s' % (
        max([len(abs) for (src, abs, rel, exact) in items]),
        max([len(rel) for (src, abs, rel, exact) in items]))
    for src, abs, rel, exact in items:
        line = fmt % (src, abs, rel, exact and 'exact' or '')
        ui.write("%s\n" % line.rstrip())

def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.

    Without the -a option, diff will avoid generating diffs of files
    it detects as binary. With -a, diff will generate a diff anyway,
    probably with undesirable results.
    """
    node1, node2 = revpair(ui, repo, opts['rev'])

    fns, matchfn, anypats = matchpats(repo, pats, opts)

    dodiff(sys.stdout, ui, repo, node1, node2, fns, match=matchfn,
           text=opts['text'], opts=opts)

def doexport(ui, repo, changeset, seqno, total, revwidth, opts):
    node = repo.lookup(changeset)
    parents = [p for p in repo.changelog.parents(node) if p != nullid]
    if opts['switch_parent']:
        parents.reverse()
    prev = (parents and parents[0]) or nullid
    change = repo.changelog.read(node)

    fp = make_file(repo, opts['output'], node, total=total, seqno=seqno,
                   revwidth=revwidth)
    if fp != sys.stdout:
        ui.note("%s\n" % fp.name)

    fp.write("# HG changeset patch\n")
    fp.write("# User %s\n" % change[1])
    fp.write("# Date %d %d\n" % change[2])
    fp.write("# Node ID %s\n" % hex(node))
    fp.write("# Parent %s\n" % hex(prev))
    if len(parents) > 1:
        fp.write("# Parent %s\n" % hex(parents[1]))
    fp.write(change[4].rstrip())
    fp.write("\n\n")

    dodiff(fp, ui, repo, prev, node, text=opts['text'])
    if fp != sys.stdout:
        fp.close()

def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.

    The information shown in the changeset header is: author,
    changeset hash, parent and commit comment.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    %%   literal "%" character
    %H   changeset hash (40 bytes of hexadecimal)
    %N   number of patches being generated
    %R   changeset revision number
    %b   basename of the exporting repository
    %h   short-form changeset hash (12 bytes of hexadecimal)
    %n   zero-padded sequence number, starting at 1
    %r   zero-padded changeset revision number

    Without the -a option, export will avoid generating diffs of files
    it detects as binary. With -a, export will generate a diff anyway,
    probably with undesirable results.

    With the --switch-parent option, the diff will be against the second
    parent. This can be useful for reviewing a merge.
    """
    if not changesets:
        raise util.Abort(_("export requires at least one changeset"))
    seqno = 0
    revs = list(revrange(ui, repo, changesets))
    total = len(revs)
    revwidth = max(map(len, revs))
    msg = len(revs) > 1 and _("Exporting patches:\n") or _("Exporting patch:\n")
    ui.note(msg)
    for cset in revs:
        seqno += 1
        doexport(ui, repo, cset, seqno, total, revwidth, opts)

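# The real filename expansion is done by make_file() elsewhere in this module
# (which also receives total= and revwidth=, presumably to size the padding).
# The helper below is only an illustrative sketch of the documented format
# keys using plain string substitution; its name and the fixed padding widths
# are assumptions, not Mercurial behaviour.
def _example_expand_export_name(pattern, node_hex, rev, seqno, total, reponame):
    # e.g. pattern '%b-r%R-%h.patch' -> 'myrepo-r42-0123456789ab.patch'
    expansions = {
        '%': '%',                # '%%' -> literal '%'
        'H': node_hex,           # full 40-digit hash
        'N': '%d' % total,       # number of patches being generated
        'R': '%d' % rev,         # revision number
        'b': reponame,           # basename of the exporting repository
        'h': node_hex[:12],      # short-form hash
        'n': '%02d' % seqno,     # zero-padded sequence number (width assumed)
        'r': '%03d' % rev,       # zero-padded revision number (width assumed)
    }
    return re.sub(r'%(.)', lambda m: expansions.get(m.group(1), m.group(0)),
                  pattern)
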
def forget(ui, repo, *pats, **opts):
    """don't add the specified files on the next commit (DEPRECATED)

    (DEPRECATED)
    Undo an 'hg add' scheduled for the next commit.

    This command is now deprecated and will be removed in a future
    release. Please use revert instead.
    """
    ui.warn(_("(the forget command is deprecated; use revert instead)\n"))
    forget = []
    for src, abs, rel, exact in walk(repo, pats, opts):
        if repo.dirstate.state(abs) == 'a':
            forget.append(abs)
            if ui.verbose or not exact:
                ui.status(_('forgetting %s\n') % ((pats and rel) or abs))
    repo.forget(forget)

def grep(ui, repo, pattern, *pats, **opts):
    """search for a pattern in specified files and revisions

    Search revisions of files for a regular expression.

    This command behaves differently than Unix grep. It only accepts
    Python/Perl regexps. It searches repository history, not the
    working directory. It always prints the revision number in which
    a match appears.

    By default, grep only prints output for the first revision of a
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that
    becomes a non-match, or "+" for a non-match that becomes a match),
    use the --all flag.
    """
    reflags = 0
    if opts['ignore_case']:
        reflags |= re.I
    regexp = re.compile(pattern, reflags)
    sep, eol = ':', '\n'
    if opts['print0']:
        sep = eol = '\0'

    fcache = {}
    def getfile(fn):
        if fn not in fcache:
            fcache[fn] = repo.file(fn)
        return fcache[fn]

    def matchlines(body):
        begin = 0
        linenum = 0
        while True:
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            lend = body.find('\n', mend)
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
            begin = lend + 1

    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend
        def __eq__(self, other):
            return self.line == other.line
        def __hash__(self):
            return hash(self.line)

    matches = {}
    def grepbody(fn, rev, body):
        matches[rev].setdefault(fn, {})
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m[s] = s

    # FIXME: prev isn't used, why ?
    prev = {}
    ucache = {}
    def display(fn, rev, states, prevstates):
        diff = list(sets.Set(states).symmetric_difference(sets.Set(prevstates)))
        diff.sort(lambda x, y: cmp(x.linenum, y.linenum))
        counts = {'-': 0, '+': 0}
        filerevmatches = {}
        for l in diff:
            if incrementing or not opts['all']:
                change = ((l in prevstates) and '-') or '+'
                r = rev
            else:
                change = ((l in states) and '-') or '+'
                r = prev[fn]
            cols = [fn, str(rev)]
            if opts['line_number']:
                cols.append(str(l.linenum))
            if opts['all']:
                cols.append(change)
            if opts['user']:
                cols.append(trimuser(ui, getchange(rev)[1], rev,
                                     ucache))
            if opts['files_with_matches']:
                c = (fn, rev)
                if c in filerevmatches:
                    continue
                filerevmatches[c] = 1
            else:
                cols.append(l.line)
            ui.write(sep.join(cols), eol)
            counts[change] += 1
        return counts['+'], counts['-']

    fstate = {}
    skip = {}
    changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)
    count = 0
    incrementing = False
    for st, rev, fns in changeiter:
        if st == 'window':
            incrementing = rev
            matches.clear()
        elif st == 'add':
            change = repo.changelog.read(repo.lookup(str(rev)))
            mf = repo.manifest.read(change[0])
            matches[rev] = {}
            for fn in fns:
                if fn in skip:
                    continue
                fstate.setdefault(fn, {})
                try:
                    grepbody(fn, rev, getfile(fn).read(mf[fn]))
                except KeyError:
                    pass
        elif st == 'iter':
            states = matches[rev].items()
            states.sort()
            for fn, m in states:
                if fn in skip:
                    continue
                if incrementing or not opts['all'] or fstate[fn]:
                    pos, neg = display(fn, rev, m, fstate[fn])
                    count += pos + neg
                    if pos and not opts['all']:
                        skip[fn] = True
                fstate[fn] = m
                prev[fn] = rev

    if not incrementing:
        fstate = fstate.items()
        fstate.sort()
        for fn, state in fstate:
            if fn in skip:
                continue
            display(fn, rev, {}, state)
    return (count == 0 and 1) or 0

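# A standalone sketch (plain builtins, hypothetical helper) of the idea behind
# display() above: the symmetric difference between the matching lines of two
# revisions is exactly the set of lines whose match status changed, which is
# what --all reports with '+' and '-'.
def _example_match_status_changes(prev_lines, cur_lines):
    # e.g. prev_lines = ['foo()', 'bar()'], cur_lines = ['bar()', 'baz()']
    # -> [('+', 'baz()'), ('-', 'foo()')]
    changed = set(prev_lines) ^ set(cur_lines)
    out = []
    for l in changed:
        out.append(((l in prev_lines) and '-' or '+', l))
    out.sort()
    return out
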
def heads(ui, repo, **opts):
    """show current repository heads

    Show all repository head changesets.

    Repository "heads" are changesets that don't have children
    changesets. They are where development generally takes place and
    are the usual targets for update and merge operations.
    """
    if opts['rev']:
        heads = repo.heads(repo.lookup(opts['rev']))
    else:
        heads = repo.heads()
    br = None
    if opts['branches']:
        br = repo.branchlookup(heads)
    displayer = show_changeset(ui, repo, opts)
    for n in heads:
        displayer.show(changenode=n, brinfo=br)

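# A standalone sketch (plain dicts, hypothetical helper) of the definition in
# the docstring above: a head is any revision that no other revision names as
# a parent.
def _example_find_heads(parentmap):
    # parentmap: {rev: [parent revs]}, e.g. {0: [], 1: [0], 2: [0]} -> [1, 2]
    has_child = {}
    for rev, parents in parentmap.items():
        for p in parents:
            has_child[p] = True
    return [rev for rev in sorted(parentmap) if rev not in has_child]
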
def identify(ui, repo):
    """print information about the working copy

    Print a short summary of the current state of the repo.

    This summary identifies the repository state using one or two parent
    hash identifiers, followed by a "+" if there are uncommitted changes
    in the working directory, followed by a list of tags for this revision.
    """
    parents = [p for p in repo.dirstate.parents() if p != nullid]
    if not parents:
        ui.write(_("unknown\n"))
        return

    hexfunc = ui.verbose and hex or short
    modified, added, removed, deleted, unknown = repo.changes()
    output = ["%s%s" %
              ('+'.join([hexfunc(parent) for parent in parents]),
               (modified or added or removed or deleted) and "+" or "")]

    if not ui.quiet:
        # multiple tags for a single parent separated by '/'
        parenttags = ['/'.join(tags)
                      for tags in map(repo.nodetags, parents) if tags]
        # tags for multiple parents separated by ' + '
        if parenttags:
            output.append(' + '.join(parenttags))

    ui.write("%s\n" % ' '.join(output))

def import_(ui, repo, patch1, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually.

    If there are outstanding changes in the working directory, import
    will abort unless given the -f flag.

    You can import a patch straight from a mail message. Even patches
    as attachments work (the body part must be type text/plain or
    text/x-patch to be used). The From and Subject headers of the email
    message are used as the default committer and commit message. All
    text/plain body parts before the first diff are added to the commit
    message.

    If the imported patch was generated by hg export, the user and
    description from the patch override the values from the message
    headers and body. Values given on the command line with -m and -u
    override these.

    To read a patch from standard input, use patch name "-".
    """
    patches = (patch1,) + patches

    if not opts['force']:
        bail_if_changed(repo)

    d = opts["base"]
    strip = opts["strip"]

    mailre = re.compile(r'(?:From |[\w-]+:)')

    # attempt to detect the start of a patch
    # (this heuristic is borrowed from quilt)
    diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |' +
                        'retrieving revision [0-9]+(\.[0-9]+)*$|' +
                        '(---|\*\*\*)[ \t])', re.MULTILINE)

    for patch in patches:
        pf = os.path.join(d, patch)

        message = None
        user = None
        date = None
        hgpatch = False

        p = email.Parser.Parser()
        if pf == '-':
            msg = p.parse(sys.stdin)
            ui.status(_("applying patch from stdin\n"))
        else:
            msg = p.parse(file(pf))
            ui.status(_("applying %s\n") % patch)

        fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
        tmpfp = os.fdopen(fd, 'w')
        try:
            message = msg['Subject']
            if message:
                message = message.replace('\n\t', ' ')
                ui.debug('Subject: %s\n' % message)
            user = msg['From']
            if user:
                ui.debug('From: %s\n' % user)
            diffs_seen = 0
            ok_types = ('text/plain', 'text/x-patch')
            for part in msg.walk():
                content_type = part.get_content_type()
                ui.debug('Content-Type: %s\n' % content_type)
                if content_type not in ok_types:
                    continue
                payload = part.get_payload(decode=True)
                m = diffre.search(payload)
                if m:
                    ui.debug(_('found patch at byte %d\n') % m.start(0))
                    diffs_seen += 1
                    hgpatch = False
                    fp = cStringIO.StringIO()
                    if message:
                        fp.write(message)
                        fp.write('\n')
                    for line in payload[:m.start(0)].splitlines():
                        if line.startswith('# HG changeset patch'):
                            ui.debug(_('patch generated by hg export\n'))
                            hgpatch = True
                            # drop earlier commit message content
                            fp.seek(0)
                            fp.truncate()
                        elif hgpatch:
                            if line.startswith('# User '):
                                user = line[7:]
                                ui.debug('From: %s\n' % user)
                            elif line.startswith("# Date "):
                                date = line[7:]
                        if not line.startswith('# '):
                            fp.write(line)
                            fp.write('\n')
                    message = fp.getvalue()
                    if tmpfp:
                        tmpfp.write(payload)
                        if not payload.endswith('\n'):
                            tmpfp.write('\n')
                elif not diffs_seen and message and content_type == 'text/plain':
                    message += '\n' + payload

            if opts['message']:
                # pickup the cmdline msg
                message = opts['message']
            elif message:
                # pickup the patch msg
                message = message.strip()
            else:
                # launch the editor
                message = None
            ui.debug(_('message:\n%s\n') % message)

            tmpfp.close()
            if not diffs_seen:
                raise util.Abort(_('no diffs found'))

            files = util.patch(strip, tmpname, ui)
            if len(files) > 0:
                addremove_lock(ui, repo, files, {})
            repo.commit(files, message, user, date)
        finally:
            os.unlink(tmpname)

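# A standalone sketch (plain strings, hypothetical helper) of the hg-export
# header handling inside import_ above: the text before the first diff is
# scanned, and "# User" / "# Date" lines override the values taken from the
# mail headers.
def _example_parse_export_header(text, default_user=None):
    # e.g. text = '# HG changeset patch\n# User alice\n# Date 0 0\nfix bug\n'
    # -> ('alice', '0 0')
    user, date = default_user, None
    for line in text.splitlines():
        if line.startswith('# User '):
            user = line[7:]
        elif line.startswith('# Date '):
            date = line[7:]
    return user, date
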
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would be pulled if a pull
    was requested.

    For remote repositories, using --bundle avoids downloading the changesets
    twice if the incoming is followed by a pull.

    See pull for valid source format details.
    """
    source = ui.expandpath(source)
    ui.setconfig_remoteopts(**opts)

    other = hg.repository(ui, source)
    incoming = repo.findincoming(other, force=opts["force"])
    if not incoming:
        ui.status(_("no changes found\n"))
        return

    cleanup = None
    try:
        fname = opts["bundle"]
        if fname or not other.local():
            # create a bundle (uncompressed if other repo is not local)
            cg = other.changegroup(incoming, "incoming")
            fname = cleanup = write_bundle(cg, fname, compress=other.local())
            # keep written bundle?
            if opts["bundle"]:
                cleanup = None
            if not other.local():
                # use the created uncompressed bundlerepo
                other = bundlerepo.bundlerepository(ui, repo.root, fname)

        revs = None
        if opts['rev']:
            revs = [other.lookup(rev) for rev in opts['rev']]
        o = other.changelog.nodesbetween(incoming, revs)[0]
        if opts['newest_first']:
            o.reverse()
        displayer = show_changeset(ui, other, opts)
        for n in o:
            parents = [p for p in other.changelog.parents(n) if p != nullid]
            if opts['no_merges'] and len(parents) == 2:
                continue
            displayer.show(changenode=n)
            if opts['patch']:
                prev = (parents and parents[0]) or nullid
                dodiff(ui, ui, other, prev, n)
                ui.write("\n")
    finally:
        if hasattr(other, 'close'):
            other.close()
        if cleanup:
            os.unlink(cleanup)

def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it is created.

    If no directory is given, the current directory is used.

    It is possible to specify an ssh:// URL as the destination.
    Look at the help text for the pull command for important details
    about ssh:// URLs.
    """
    ui.setconfig_remoteopts(**opts)
    hg.repository(ui, dest, create=1)

def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns

    Print all files under Mercurial control whose names match the
    given patterns.

    This command searches the current directory and its
    subdirectories. To search an entire repository, move to the root
    of the repository.

    If no patterns are given to match, this command prints all file
    names.

    If you want to feed the output of this command into the "xargs"
    command, use the "-0" option to both this command and "xargs".
    This will avoid the problem of "xargs" treating single filenames
    that contain white space as multiple filenames.
    """
    end = opts['print0'] and '\0' or '\n'
    rev = opts['rev']
    if rev:
        node = repo.lookup(rev)
    else:
        node = None

    for src, abs, rel, exact in walk(repo, pats, opts, node=node,
                                     head='(?:.*/|)'):
        if not node and repo.dirstate.state(abs) == '?':
            continue
        if opts['fullpath']:
            ui.write(os.path.join(repo.root, abs), end)
        else:
            ui.write(((pats and rel) or abs), end)

def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire project.

    By default this command outputs: changeset id and hash, tags,
    non-trivial parents, user, date and time, and a summary for each
    commit. When the -v/--verbose switch is used, the list of changed
    files and full commit message is shown.
    """
    class dui(object):
        # Implement and delegate some ui protocol.  Save hunks of
        # output for later display in the desired order.
        def __init__(self, ui):
            self.ui = ui
            self.hunk = {}
            self.header = {}
        def bump(self, rev):
            self.rev = rev
            self.hunk[rev] = []
            self.header[rev] = []
        def note(self, *args):
            if self.verbose:
                self.write(*args)
        def status(self, *args):
            if not self.quiet:
                self.write(*args)
        def write(self, *args):
            self.hunk[self.rev].append(args)
        def write_header(self, *args):
            self.header[self.rev].append(args)
        def debug(self, *args):
            if self.debugflag:
                self.write(*args)
        def __getattr__(self, key):
            return getattr(self.ui, key)

    changeiter, getchange, matchfn = walkchangerevs(ui, repo, pats, opts)

    if opts['limit']:
        try:
            limit = int(opts['limit'])
        except ValueError:
            raise util.Abort(_('limit must be a positive integer'))
        if limit <= 0: raise util.Abort(_('limit must be positive'))
    else:
        limit = sys.maxint
    count = 0

    displayer = show_changeset(ui, repo, opts)
    for st, rev, fns in changeiter:
        if st == 'window':
            du = dui(ui)
            displayer.ui = du
        elif st == 'add':
            du.bump(rev)
            changenode = repo.changelog.node(rev)
            parents = [p for p in repo.changelog.parents(changenode)
                       if p != nullid]
            if opts['no_merges'] and len(parents) == 2:
                continue
            if opts['only_merges'] and len(parents) != 2:
                continue

            if opts['keyword']:
                changes = getchange(rev)
                miss = 0
                for k in [kw.lower() for kw in opts['keyword']]:
                    if not (k in changes[1].lower() or
                            k in changes[4].lower() or
                            k in " ".join(changes[3][:20]).lower()):
                        miss = 1
                        break
                if miss:
                    continue

            br = None
            if opts['branches']:
                br = repo.branchlookup([repo.changelog.node(rev)])

            displayer.show(rev, brinfo=br)
            if opts['patch']:
                prev = (parents and parents[0]) or nullid
                dodiff(du, du, repo, prev, changenode, match=matchfn)
                du.write("\n\n")
        elif st == 'iter':
            if count == limit: break
            if du.header[rev]:
                for args in du.header[rev]:
                    ui.write_header(*args)
            if du.hunk[rev]:
                count += 1
                for args in du.hunk[rev]:
                    ui.write(*args)

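# A standalone sketch (plain lists and dicts, hypothetical names) of the
# buffering trick used by the dui class above: output produced while walking
# files is stashed per revision, then flushed only when the 'iter' event for
# that revision arrives, so the log comes out in revision order regardless of
# the order in which the hunks were generated.
def _example_buffered_output(events):
    # events: sequence of ('add', rev, text) and ('iter', rev) tuples
    hunks, out = {}, []
    for event in events:
        if event[0] == 'add':
            hunks.setdefault(event[1], []).append(event[2])
        elif event[0] == 'iter':
            out.extend(hunks.get(event[1], []))
    return out
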
def manifest(ui, repo, rev=None):
    """output the latest or given revision of the project manifest

    Print a list of version controlled files for the given revision.

    The manifest is the list of files being version controlled. If no revision
    is given then the tip is used.
    """
    if rev:
        try:
            # assume all revision numbers are for changesets
            n = repo.lookup(rev)
            change = repo.changelog.read(n)
            n = change[0]
        except hg.RepoError:
            n = repo.manifest.lookup(rev)
    else:
        n = repo.manifest.tip()
    m = repo.manifest.read(n)
    mf = repo.manifest.readflags(n)
    files = m.keys()
    files.sort()

    for f in files:
        ui.write("%40s %3s %s\n" % (hex(m[f]), mf[f] and "755" or "644", f))

def merge(ui, repo, node=None, **opts):
    """Merge working directory with another revision

    Merge the contents of the current working directory and the
    requested revision. Files that changed between either parent are
    marked as changed for the next commit and a commit must be
    performed before any further updates are allowed.
    """
    return doupdate(ui, repo, node=node, merge=True, **opts)

def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in destination

    Show changesets not found in the specified destination repository or
    the default push location. These are the changesets that would be pushed
    if a push was requested.

    See pull for valid destination format details.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    ui.setconfig_remoteopts(**opts)
    revs = None
    if opts['rev']:
        revs = [repo.lookup(rev) for rev in opts['rev']]

    other = hg.repository(ui, dest)
    o = repo.findoutgoing(other, force=opts['force'])
    if not o:
        ui.status(_("no changes found\n"))
        return
    o = repo.changelog.nodesbetween(o, revs)[0]
    if opts['newest_first']:
        o.reverse()
    displayer = show_changeset(ui, repo, opts)
    for n in o:
        parents = [p for p in repo.changelog.parents(n) if p != nullid]
        if opts['no_merges'] and len(parents) == 2:
            continue
        displayer.show(changenode=n)
        if opts['patch']:
            prev = (parents and parents[0]) or nullid
            dodiff(ui, ui, repo, prev, n)
            ui.write("\n")

def parents(ui, repo, rev=None, branches=None, **opts):
    """show the parents of the working dir or revision

    Print the working directory's parent revisions.
    """
    if rev:
        p = repo.changelog.parents(repo.lookup(rev))
    else:
        p = repo.dirstate.parents()

    br = None
    if branches is not None:
        br = repo.branchlookup(p)
    displayer = show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(changenode=n, brinfo=br)

def paths(ui, repo, search=None):
    """show definition of symbolic path names

    Show definition of symbolic path name NAME. If no name is given, show
    definitions of all available names.

    Path names are defined in the [paths] section of /etc/mercurial/hgrc
    and $HOME/.hgrc. If run inside a repository, .hg/hgrc is used, too.
    """
    if search:
        for name, path in ui.configitems("paths"):
            if name == search:
                ui.write("%s\n" % path)
                return
        ui.warn(_("not found!\n"))
        return 1
    else:
        for name, path in ui.configitems("paths"):
            ui.write("%s = %s\n" % (name, path))

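# Illustrative example for `hg paths` (the section below and its entries are
# hypothetical, not taken from any real configuration): with an hgrc
# containing
#
#   [paths]
#   default = http://hg.example.com/project
#   backup = ssh://user@example.com//srv/hg/project
#
# `hg paths` prints both "name = path" lines, and `hg paths backup` prints
# just the ssh:// path.
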
def postincoming(ui, repo, modheads, optupdate):
    if modheads == 0:
        return
    if optupdate:
        if modheads == 1:
            return doupdate(ui, repo)
        else:
            ui.status(_("not updating, since new heads added\n"))
    if modheads > 1:
        ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to the local repository. By default, this
    does not update the copy of the project in the working directory.

    Valid URLs are of the form:

      local/filesystem/path
      http://[user@]host[:port]/[path]
      https://[user@]host[:port]/[path]
      ssh://[user@]host[:port]/[path]

    Some notes about using SSH with Mercurial:
    - SSH requires an accessible shell account on the destination machine
      and a copy of hg in the remote path, or one specified with the
      --remotecmd option.
    - path is relative to the remote user's home directory by default.
      Use an extra slash at the start of a path to specify an absolute path:
        ssh://example.com//tmp/repository
    - Mercurial doesn't use its own compression via SSH; the right thing
      to do is to configure it in your ~/.ssh/ssh_config, e.g.:
        Host *.mylocalnetwork.example.com
          Compression off
        Host *
          Compression on
      Alternatively specify "ssh -C" as your ssh command in your hgrc or
      with the --ssh command line option.
    """
    source = ui.expandpath(source)
    ui.setconfig_remoteopts(**opts)

    other = hg.repository(ui, source)
    ui.status(_('pulling from %s\n') % (source))
    revs = None
    if opts['rev'] and not other.local():
        raise util.Abort(_("pull -r doesn't work for remote repositories yet"))
    elif opts['rev']:
        revs = [other.lookup(rev) for rev in opts['rev']]
    modheads = repo.pull(other, heads=revs, force=opts['force'])
    return postincoming(ui, repo, modheads, opts['update'])

def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changes from the local repository to the given destination.

    This is the symmetrical operation for pull. It helps to move
    changes from the current repository to a different one. If the
    destination is local this is identical to a pull in that directory
    from the current one.

    By default, push will refuse to run if it detects the result would
    increase the number of remote heads. This generally indicates that
    the client has forgotten to sync and merge before pushing.

    Valid URLs are of the form:

      local/filesystem/path
      ssh://[user@]host[:port]/[path]

    Look at the help text for the pull command for important details
    about ssh:// URLs.

    Pushing to http:// and https:// URLs is possible, too, if this
    feature is enabled on the remote Mercurial server.
    """
    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    ui.setconfig_remoteopts(**opts)

    other = hg.repository(ui, dest)
    ui.status('pushing to %s\n' % (dest))
    revs = None
    if opts['rev']:
        revs = [repo.lookup(rev) for rev in opts['rev']]
    r = repo.push(other, opts['force'], revs=revs)
    return r == 0

def rawcommit(ui, repo, *flist, **rc):
    """raw commit interface (DEPRECATED)

    (DEPRECATED)
    Lowlevel commit, for use in helper scripts.

    This command is not intended to be used by normal users, as it is
    primarily useful for importing from other SCMs.

    This command is now deprecated and will be removed in a future
    release, please use debugsetparents and commit instead.
    """

    ui.warn(_("(the rawcommit command is deprecated)\n"))

    message = rc['message']
    if not message and rc['logfile']:
        try:
            message = open(rc['logfile']).read()
        except IOError:
            pass
    if not message and not rc['logfile']:
        raise util.Abort(_("missing commit message"))

    files = relpath(repo, list(flist))
    if rc['files']:
        files += open(rc['files']).read().splitlines()

    rc['parent'] = map(repo.lookup, rc['parent'])

    try:
        repo.rawcommit(files, message, rc['user'], rc['date'], *rc['parent'])
    except ValueError, inst:
        raise util.Abort(str(inst))

def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an interrupted
    operation. It should only be necessary when Mercurial suggests it.
    """
    if repo.recover():
        return repo.verify()
    return 1

def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the repository.

    This command schedules the files to be removed at the next commit.
    This only removes files from the current branch, not from the
    entire project history. If the files still exist in the working
    directory, they will be deleted from it. If invoked with --after,
    files that have been manually deleted are marked as removed.

    Modified files and added files are not removed by default. To
    remove them, use the -f/--force option.
    """
    names = []
    if not opts['after'] and not pats:
        raise util.Abort(_('no files specified'))
    files, matchfn, anypats = matchpats(repo, pats, opts)
    exact = dict.fromkeys(files)
    mardu = map(dict.fromkeys, repo.changes(files=files, match=matchfn))
    modified, added, removed, deleted, unknown = mardu
    remove, forget = [], []
    for src, abs, rel, exact in walk(repo, pats, opts):
        reason = None
        if abs not in deleted and opts['after']:
            reason = _('is still present')
        elif abs in modified and not opts['force']:
            reason = _('is modified (use -f to force removal)')
        elif abs in added:
            if opts['force']:
                forget.append(abs)
                continue
            reason = _('has been marked for add (use -f to force removal)')
        elif abs in unknown:
            reason = _('is not managed')
        elif abs in removed:
            continue
        if reason:
            if exact:
                ui.warn(_('not removing %s: file %s\n') % (rel, reason))
        else:
            if ui.verbose or not exact:
                ui.status(_('removing %s\n') % rel)
            remove.append(abs)
    repo.forget(forget)
    repo.remove(remove, unlink=not opts['after'])

def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If
    dest is a directory, copies are put in that directory. If dest is
    a file, there can only be one source.

    By default, this command copies the contents of files as they
    stand in the working directory. If invoked with --after, the
    operation is recorded, but no copying is performed.

    This command takes effect in the next commit.

    NOTE: This command should be treated as experimental. While it
    should properly record renamed files, this information is not yet
    fully used by merge, nor fully reported by log.
    """
    wlock = repo.wlock(0)
    errs, copied = docopy(ui, repo, pats, opts, wlock)
    names = []
    for abs, rel, exact in copied:
        if ui.verbose or not exact:
            ui.status(_('removing %s\n') % rel)
        names.append(abs)
    if not opts.get('dry_run'):
        repo.remove(names, True, wlock)
    return errs

def revert(ui, repo, *pats, **opts):
    """revert files or dirs to their states as of some revision

    With no revision specified, revert the named files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of the affected files to an unmodified
    state. If the working directory has two parents, you must
    explicitly specify the revision to revert to.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup.

    Using the -r option, revert the given files or directories to
    their contents as of a specific revision. This can be helpful to "roll
    back" some or all of a change that should not have been committed.

    Revert modifies the working directory. It does not commit any
    changes, or change the parent of the working directory. If you
    revert to a revision other than the parent of the working
    directory, the reverted files will thus appear modified
    afterwards.

    If a file has been deleted, it is recreated. If the executable
    mode of a file was changed, it is reset.

    If names are given, all files matching the names are reverted.

    If no arguments are given, all files in the repository are reverted.
    """
    parent, p2 = repo.dirstate.parents()
    if opts['rev']:
        node = repo.lookup(opts['rev'])
    elif p2 != nullid:
        raise util.Abort(_('working dir has two parents; '
                           'you must specify the revision to revert to'))
    else:
        node = parent
    mf = repo.manifest.read(repo.changelog.read(node)[0])
    if node == parent:
        pmf = mf
    else:
        pmf = None

    wlock = repo.wlock()

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not other.

    names = {}
    target_only = {}

    # walk dirstate.

    for src, abs, rel, exact in walk(repo, pats, opts, badmatch=mf.has_key):
        names[abs] = (rel, exact)
        if src == 'b':
            target_only[abs] = True

    # walk target manifest.

    for src, abs, rel, exact in walk(repo, pats, opts, node=node,
                                     badmatch=names.has_key):
        if abs in names: continue
        names[abs] = (rel, exact)
        target_only[abs] = True

    changes = repo.changes(match=names.has_key, wlock=wlock)
    modified, added, removed, deleted, unknown = map(dict.fromkeys, changes)

    revert = ([], _('reverting %s\n'))
    add = ([], _('adding %s\n'))
    remove = ([], _('removing %s\n'))
    forget = ([], _('forgetting %s\n'))
    undelete = ([], _('undeleting %s\n'))
    update = {}

    disptable = (
        # dispatch table:
        #   file state
        #   action if in target manifest
        #   action if not in target manifest
        #   make backup if in target manifest
        #   make backup if not in target manifest
        (modified, revert, remove, True, True),
        (added, revert, forget, True, False),
        (removed, undelete, None, False, False),
        (deleted, revert, remove, False, False),
        (unknown, add, None, True, False),
        (target_only, add, None, False, False),
        )

    entries = names.items()
    entries.sort()

    for abs, (rel, exact) in entries:
        mfentry = mf.get(abs)
        def handle(xlist, dobackup):
            xlist[0].append(abs)
            update[abs] = 1
            if dobackup and not opts['no_backup'] and os.path.exists(rel):
                bakname = "%s.orig" % rel
                ui.note(_('saving current version of %s as %s\n') %
                        (rel, bakname))
                if not opts.get('dry_run'):
                    shutil.copyfile(rel, bakname)
                    shutil.copymode(rel, bakname)
            if ui.verbose or not exact:
                ui.status(xlist[1] % rel)
        for table, hitlist, misslist, backuphit, backupmiss in disptable:
            if abs not in table: continue
            # file has changed in dirstate
            if mfentry:
                handle(hitlist, backuphit)
            elif misslist is not None:
                handle(misslist, backupmiss)
            else:
                if exact: ui.warn(_('file not managed: %s\n' % rel))
            break
        else:
            # file has not changed in dirstate
            if node == parent:
                if exact: ui.warn(_('no changes needed to %s\n' % rel))
                continue
            if pmf is None:
                # only need parent manifest in this unlikely case,
                # so do not read by default
                pmf = repo.manifest.read(repo.changelog.read(parent)[0])
            if abs in pmf:
                if mfentry:
                    # if version of file is same in parent and target
                    # manifests, do nothing
                    if pmf[abs] != mfentry:
                        handle(revert, False)
                else:
                    handle(remove, False)

    if not opts.get('dry_run'):
        repo.dirstate.forget(forget[0])
        r = repo.update(node, False, True, update.has_key, False, wlock=wlock,
                        show_stats=False)
        repo.dirstate.update(add[0], 'a')
        repo.dirstate.update(undelete[0], 'n')
        repo.dirstate.update(remove[0], 'r')
        return r

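# Worked example for the dispatch table above (file names are hypothetical):
# a locally modified file that still exists in the target manifest matches
# the (modified, revert, remove, True, True) row, so handle() first saves it
# as "<name>.orig" (unless --no-backup) and then schedules the revert; the
# same file missing from the target manifest falls through to the 'remove'
# column instead, still with a backup.  A file in the 'removed' state is
# undeleted if the target manifest still has it, and left untouched otherwise.
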
def rollback(ui, repo):
    """roll back the last transaction in this repository

    Roll back the last transaction in this repository, restoring the
    project to its state prior to the transaction.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository. For example, the following commands are transactional,
    and their effects can be rolled back:

      commit
      import
      pull
      push (with this repository as destination)
      unbundle

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.
    """
    repo.rollback()

def root(ui, repo):
    """print the root (top) of the current working dir

    Print the root directory of the current repository.
    """
    ui.write(repo.root + "\n")

def serve(ui, repo, **opts):
    """export the repository via HTTP

    Start a local HTTP repository browser and pull server.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the "-A" and "-E" options to log to files.
    """

    if opts["stdio"]:
        if repo is None:
            raise hg.RepoError(_('no repo found'))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    optlist = ("name templates style address port ipv6"
               " accesslog errorlog webdir_conf")
    for o in optlist.split():
        if opts[o]:
            ui.setconfig("web", o, opts[o])

    if repo is None and not ui.config("web", "webdir_conf"):
        raise hg.RepoError(_('no repo found'))

    if opts['daemon'] and not opts['daemon_pipefds']:
        rfd, wfd = os.pipe()
        args = sys.argv[:]
        args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
        pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
                         args[0], args)
        os.close(wfd)
        os.read(rfd, 1)
        os._exit(0)

    try:
        httpd = hgweb.server.create_server(ui, repo)
    except socket.error, inst:
        raise util.Abort(_('cannot start server: ') + inst.args[1])

    if ui.verbose:
        addr, port = httpd.socket.getsockname()
        if addr == '0.0.0.0':
            addr = socket.gethostname()
        else:
            try:
                addr = socket.gethostbyaddr(addr)[0]
            except socket.error:
                pass
        if port != 80:
            ui.status(_('listening at http://%s:%d/\n') % (addr, port))
        else:
            ui.status(_('listening at http://%s/\n') % addr)

    if opts['pid_file']:
        fp = open(opts['pid_file'], 'w')
        fp.write(str(os.getpid()) + '\n')
        fp.close()

    if opts['daemon_pipefds']:
        rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
        os.close(rfd)
        os.write(wfd, 'y')
        os.close(wfd)
        sys.stdout.flush()
        sys.stderr.flush()
        fd = os.open(util.nulldev, os.O_RDWR)
        if fd != 0: os.dup2(fd, 0)
        if fd != 1: os.dup2(fd, 1)
        if fd != 2: os.dup2(fd, 2)
        if fd not in (0, 1, 2): os.close(fd)

    httpd.serve_forever()

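# Note on the --daemon handshake above: the parent re-executes itself with
# --daemon-pipefds and then blocks in os.read(rfd, 1) until the detached
# child, once its HTTP server is set up, writes 'y' into the pipe.  A
# simplified sketch of the same idea using fork() instead of the re-exec
# (hypothetical, not how hg itself does it):
#
#   import os, sys
#   rfd, wfd = os.pipe()
#   if os.fork():              # parent: wait for the child to become ready
#       os.close(wfd)
#       os.read(rfd, 1)
#       sys.exit(0)
#   os.close(rfd)
#   # ... child binds its listening socket here ...
#   os.write(wfd, 'y')         # signal readiness to the waiting parent
#   os.close(wfd)
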
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show changed files in the repository. If names are
    given, only files that match are shown.

    The codes used to show the status of files are:
    M = modified
    A = added
    R = removed
    ! = deleted, but still tracked
    ? = not tracked
    I = ignored (not shown by default)
    """

    show_ignored = opts['ignored'] and True or False
    files, matchfn, anypats = matchpats(repo, pats, opts)
    cwd = (pats and repo.getcwd()) or ''
    modified, added, removed, deleted, unknown, ignored = [
        [util.pathto(cwd, x) for x in n]
        for n in repo.changes(files=files, match=matchfn,
                              show_ignored=show_ignored)]

    changetypes = [('modified', 'M', modified),
                   ('added', 'A', added),
                   ('removed', 'R', removed),
                   ('deleted', '!', deleted),
                   ('unknown', '?', unknown),
                   ('ignored', 'I', ignored)]

    end = opts['print0'] and '\0' or '\n'

    for opt, char, changes in ([ct for ct in changetypes if opts[ct[0]]]
                               or changetypes):
        if opts['no_status']:
            format = "%%s%s" % end
        else:
            format = "%s %%s%s" % (char, end)

        for f in changes:
            ui.write(format % f)

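# Example output (file names are hypothetical):
#
#   $ hg status
#   M mercurial/commands.py
#   A contrib/new-script
#   ? scratch/notes.txt
#
#   $ hg status --added --no-status          # added files, names only
#   contrib/new-script
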
def tag(ui, repo, name, rev_=None, **opts):
    """add a tag for the current tip or a given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc.

    If no revision is given, the tip is used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed
    similarly to other project files and can be hand-edited if
    necessary. The file '.hg/localtags' is used for local tags (not
    shared among repositories).
    """
    if name == "tip":
        raise util.Abort(_("the name 'tip' is reserved"))
    if rev_ is not None:
        ui.warn(_("use of 'hg tag NAME [REV]' is deprecated, "
                  "please use 'hg tag [-r REV] NAME' instead\n"))
        if opts['rev']:
            raise util.Abort(_("use only one form to specify the revision"))
    if opts['rev']:
        rev_ = opts['rev']
    if rev_:
        r = hex(repo.lookup(rev_))
    else:
        r = hex(repo.changelog.tip())

    repo.tag(name, r, opts['local'], opts['message'], opts['user'],
             opts['date'])

def tags(ui, repo):
    """list repository tags

    List the repository tags.

    This lists both regular and local tags.
    """

    l = repo.tagslist()
    l.reverse()
    for t, n in l:
        try:
            r = "%5d:%s" % (repo.changelog.rev(n), hex(n))
        except KeyError:
            r = " ?:?"
        if ui.quiet:
            ui.write("%s\n" % t)
        else:
            ui.write("%-30s %s\n" % (t, r))

def tip(ui, repo, **opts):
    """show the tip revision

    Show the tip revision.
    """
    n = repo.changelog.tip()
    br = None
    if opts['branches']:
        br = repo.branchlookup([n])
    show_changeset(ui, repo, opts).show(changenode=n, brinfo=br)
    if opts['patch']:
        dodiff(ui, ui, repo, repo.changelog.parents(n)[0], n)

def unbundle(ui, repo, fname, **opts):
    """apply a changegroup file

    Apply a compressed changegroup file generated by the bundle
    command.
    """
    f = urllib.urlopen(fname)

    header = f.read(6)
    if not header.startswith("HG"):
        raise util.Abort(_("%s: not a Mercurial bundle file") % fname)
    elif not header.startswith("HG10"):
        raise util.Abort(_("%s: unknown bundle version") % fname)
    elif header == "HG10BZ":
        def generator(f):
            zd = bz2.BZ2Decompressor()
            zd.decompress("BZ")
            for chunk in f:
                yield zd.decompress(chunk)
    elif header == "HG10UN":
        def generator(f):
            for chunk in f:
                yield chunk
    else:
        raise util.Abort(_("%s: unknown bundle compression type")
                         % fname)
    gen = generator(util.filechunkiter(f, 4096))
    modheads = repo.addchangegroup(util.chunkbuffer(gen), 'unbundle')
    return postincoming(ui, repo, modheads, opts['update'])

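# Format note for the header checks above: the six bytes read are "HG10"
# followed by either "UN" (uncompressed) or "BZ"; in the bzip2 case those
# two bytes are also the beginning of the bzip2 stream's own magic, which is
# why the decompressor is primed with zd.decompress("BZ") before the real
# chunks are fed in.  Example round trip (file names are hypothetical):
#
#   hg bundle changes.hg ../upstream      # bundle changesets not in ../upstream
#   hg unbundle changes.hg                # apply them in another clone
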
def undo(ui, repo):
    """undo the last commit or pull (DEPRECATED)

    (DEPRECATED)
    This command is now deprecated and will be removed in a future
    release. Please use the rollback command instead. For usage
    instructions, see the rollback command.
    """
    ui.warn(_('(the undo command is deprecated; use rollback instead)\n'))
    repo.rollback()

def update(ui, repo, node=None, merge=False, clean=False, force=None,
           branch=None, **opts):
    """update or merge working directory

    Update the working directory to the specified revision.

    If there are no outstanding changes in the working directory and
    there is a linear relationship between the current version and the
    requested version, the result is the requested version.

    To merge the working directory with another revision, use the
    merge command.

    By default, update will refuse to run if doing so would require
    merging or discarding local changes.
    """
    if merge:
        ui.warn(_('(the -m/--merge option is deprecated; '
                  'use the merge command instead)\n'))
    return doupdate(ui, repo, node, merge, clean, force, branch, **opts)

def doupdate(ui, repo, node=None, merge=False, clean=False, force=None,
             branch=None, **opts):
    if branch:
        br = repo.branchlookup(branch=branch)
        found = []
        for x in br:
            if branch in br[x]:
                found.append(x)
        if len(found) > 1:
            ui.warn(_("Found multiple heads for %s\n") % branch)
            for x in found:
                show_changeset(ui, repo, opts).show(changenode=x, brinfo=br)
            return 1
        if len(found) == 1:
            node = found[0]
            ui.warn(_("Using head %s for branch %s\n") % (short(node), branch))
        else:
            ui.warn(_("branch %s not found\n") % (branch))
            return 1
    else:
        node = node and repo.lookup(node) or repo.changelog.tip()
    return repo.update(node, allow=merge, force=clean, forcemerge=force)

def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.
    """
    return repo.verify()

2801 # Command options and aliases are listed here, alphabetically
2805 # Command options and aliases are listed here, alphabetically
2802
2806
2803 table = {
2807 table = {
2804 "^add":
2808 "^add":
2805 (add,
2809 (add,
2806 [('I', 'include', [], _('include names matching the given patterns')),
2810 [('I', 'include', [], _('include names matching the given patterns')),
2807 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2811 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2808 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2812 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2809 _('hg add [OPTION]... [FILE]...')),
2813 _('hg add [OPTION]... [FILE]...')),
2810 "debugaddremove|addremove":
2814 "debugaddremove|addremove":
2811 (addremove,
2815 (addremove,
2812 [('I', 'include', [], _('include names matching the given patterns')),
2816 [('I', 'include', [], _('include names matching the given patterns')),
2813 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2817 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2814 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2818 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2815 _('hg addremove [OPTION]... [FILE]...')),
2819 _('hg addremove [OPTION]... [FILE]...')),
2816 "^annotate":
2820 "^annotate":
2817 (annotate,
2821 (annotate,
2818 [('r', 'rev', '', _('annotate the specified revision')),
2822 [('r', 'rev', '', _('annotate the specified revision')),
2819 ('a', 'text', None, _('treat all files as text')),
2823 ('a', 'text', None, _('treat all files as text')),
2820 ('u', 'user', None, _('list the author')),
2824 ('u', 'user', None, _('list the author')),
2821 ('d', 'date', None, _('list the date')),
2825 ('d', 'date', None, _('list the date')),
2822 ('n', 'number', None, _('list the revision number (default)')),
2826 ('n', 'number', None, _('list the revision number (default)')),
2823 ('c', 'changeset', None, _('list the changeset')),
2827 ('c', 'changeset', None, _('list the changeset')),
2824 ('I', 'include', [], _('include names matching the given patterns')),
2828 ('I', 'include', [], _('include names matching the given patterns')),
2825 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2829 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2826 _('hg annotate [-r REV] [-a] [-u] [-d] [-n] [-c] FILE...')),
2830 _('hg annotate [-r REV] [-a] [-u] [-d] [-n] [-c] FILE...')),
2827 "archive":
2831 "archive":
2828 (archive,
2832 (archive,
2829 [('', 'no-decode', None, _('do not pass files through decoders')),
2833 [('', 'no-decode', None, _('do not pass files through decoders')),
2830 ('p', 'prefix', '', _('directory prefix for files in archive')),
2834 ('p', 'prefix', '', _('directory prefix for files in archive')),
2831 ('r', 'rev', '', _('revision to distribute')),
2835 ('r', 'rev', '', _('revision to distribute')),
2832 ('t', 'type', '', _('type of distribution to create')),
2836 ('t', 'type', '', _('type of distribution to create')),
2833 ('I', 'include', [], _('include names matching the given patterns')),
2837 ('I', 'include', [], _('include names matching the given patterns')),
2834 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2838 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2835 _('hg archive [OPTION]... DEST')),
2839 _('hg archive [OPTION]... DEST')),
2836 "backout":
2840 "backout":
2837 (backout,
2841 (backout,
2838 [('', 'merge', None,
2842 [('', 'merge', None,
2839 _('merge with old dirstate parent after backout')),
2843 _('merge with old dirstate parent after backout')),
2840 ('m', 'message', '', _('use <text> as commit message')),
2844 ('m', 'message', '', _('use <text> as commit message')),
2841 ('l', 'logfile', '', _('read commit message from <file>')),
2845 ('l', 'logfile', '', _('read commit message from <file>')),
2842 ('d', 'date', '', _('record datecode as commit date')),
2846 ('d', 'date', '', _('record datecode as commit date')),
2843 ('', 'parent', '', _('parent to choose when backing out merge')),
2847 ('', 'parent', '', _('parent to choose when backing out merge')),
2844 ('u', 'user', '', _('record user as committer')),
2848 ('u', 'user', '', _('record user as committer')),
2845 ('I', 'include', [], _('include names matching the given patterns')),
2849 ('I', 'include', [], _('include names matching the given patterns')),
2846 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2850 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2847 _('hg backout [OPTION]... REV')),
2851 _('hg backout [OPTION]... REV')),
2848 "bundle":
2852 "bundle":
2849 (bundle,
2853 (bundle,
2850 [('f', 'force', None,
2854 [('f', 'force', None,
2851 _('run even when remote repository is unrelated'))],
2855 _('run even when remote repository is unrelated'))],
2852 _('hg bundle FILE DEST')),
2856 _('hg bundle FILE DEST')),
2853 "cat":
2857 "cat":
2854 (cat,
2858 (cat,
2855 [('o', 'output', '', _('print output to file with formatted name')),
2859 [('o', 'output', '', _('print output to file with formatted name')),
2856 ('r', 'rev', '', _('print the given revision')),
2860 ('r', 'rev', '', _('print the given revision')),
2857 ('I', 'include', [], _('include names matching the given patterns')),
2861 ('I', 'include', [], _('include names matching the given patterns')),
2858 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2862 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2859 _('hg cat [OPTION]... FILE...')),
2863 _('hg cat [OPTION]... FILE...')),
2860 "^clone":
2864 "^clone":
2861 (clone,
2865 (clone,
2862 [('U', 'noupdate', None, _('do not update the new working directory')),
2866 [('U', 'noupdate', None, _('do not update the new working directory')),
2863 ('r', 'rev', [],
2867 ('r', 'rev', [],
2864 _('a changeset you would like to have after cloning')),
2868 _('a changeset you would like to have after cloning')),
2865 ('', 'pull', None, _('use pull protocol to copy metadata')),
2869 ('', 'pull', None, _('use pull protocol to copy metadata')),
2866 ('', 'stream', None, _('use streaming protocol (fast over LAN)')),
2870 ('', 'uncompressed', None,
2871 _('use uncompressed transfer (fast over LAN)')),
2867 ('e', 'ssh', '', _('specify ssh command to use')),
2872 ('e', 'ssh', '', _('specify ssh command to use')),
2868 ('', 'remotecmd', '',
2873 ('', 'remotecmd', '',
2869 _('specify hg command to run on the remote side'))],
2874 _('specify hg command to run on the remote side'))],
2870 _('hg clone [OPTION]... SOURCE [DEST]')),
2875 _('hg clone [OPTION]... SOURCE [DEST]')),
2871 "^commit|ci":
2876 "^commit|ci":
2872 (commit,
2877 (commit,
2873 [('A', 'addremove', None,
2878 [('A', 'addremove', None,
2874 _('mark new/missing files as added/removed before committing')),
2879 _('mark new/missing files as added/removed before committing')),
2875 ('m', 'message', '', _('use <text> as commit message')),
2880 ('m', 'message', '', _('use <text> as commit message')),
2876 ('l', 'logfile', '', _('read the commit message from <file>')),
2881 ('l', 'logfile', '', _('read the commit message from <file>')),
2877 ('d', 'date', '', _('record datecode as commit date')),
2882 ('d', 'date', '', _('record datecode as commit date')),
2878 ('u', 'user', '', _('record user as committer')),
2883 ('u', 'user', '', _('record user as committer')),
2879 ('I', 'include', [], _('include names matching the given patterns')),
2884 ('I', 'include', [], _('include names matching the given patterns')),
2880 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2885 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2881 _('hg commit [OPTION]... [FILE]...')),
2886 _('hg commit [OPTION]... [FILE]...')),
2882 "copy|cp":
2887 "copy|cp":
2883 (copy,
2888 (copy,
2884 [('A', 'after', None, _('record a copy that has already occurred')),
2889 [('A', 'after', None, _('record a copy that has already occurred')),
2885 ('f', 'force', None,
2890 ('f', 'force', None,
2886 _('forcibly copy over an existing managed file')),
2891 _('forcibly copy over an existing managed file')),
2887 ('I', 'include', [], _('include names matching the given patterns')),
2892 ('I', 'include', [], _('include names matching the given patterns')),
2888 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2893 ('X', 'exclude', [], _('exclude names matching the given patterns')),
2889 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2894 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
2890 _('hg copy [OPTION]... [SOURCE]... DEST')),
2895 _('hg copy [OPTION]... [SOURCE]... DEST')),
2891 "debugancestor": (debugancestor, [], _('debugancestor INDEX REV1 REV2')),
2896 "debugancestor": (debugancestor, [], _('debugancestor INDEX REV1 REV2')),
2892 "debugcomplete":
2897 "debugcomplete":
2893 (debugcomplete,
2898 (debugcomplete,
2894 [('o', 'options', None, _('show the command options'))],
2899 [('o', 'options', None, _('show the command options'))],
2895 _('debugcomplete [-o] CMD')),
2900 _('debugcomplete [-o] CMD')),
2896 "debugrebuildstate":
2901 "debugrebuildstate":
2897 (debugrebuildstate,
2902 (debugrebuildstate,
2898 [('r', 'rev', '', _('revision to rebuild to'))],
2903 [('r', 'rev', '', _('revision to rebuild to'))],
2899 _('debugrebuildstate [-r REV] [REV]')),
2904 _('debugrebuildstate [-r REV] [REV]')),
2900 "debugcheckstate": (debugcheckstate, [], _('debugcheckstate')),
2905 "debugcheckstate": (debugcheckstate, [], _('debugcheckstate')),
2901 "debugconfig": (debugconfig, [], _('debugconfig [NAME]...')),
2906 "debugconfig": (debugconfig, [], _('debugconfig [NAME]...')),
2902 "debugsetparents": (debugsetparents, [], _('debugsetparents REV1 [REV2]')),
2907 "debugsetparents": (debugsetparents, [], _('debugsetparents REV1 [REV2]')),
2903 "debugstate": (debugstate, [], _('debugstate')),
2908 "debugstate": (debugstate, [], _('debugstate')),
2904 "debugdata": (debugdata, [], _('debugdata FILE REV')),
2909 "debugdata": (debugdata, [], _('debugdata FILE REV')),
2905 "debugindex": (debugindex, [], _('debugindex FILE')),
2910 "debugindex": (debugindex, [], _('debugindex FILE')),
2906 "debugindexdot": (debugindexdot, [], _('debugindexdot FILE')),
2911 "debugindexdot": (debugindexdot, [], _('debugindexdot FILE')),
2907 "debugrename": (debugrename, [], _('debugrename FILE [REV]')),
2912 "debugrename": (debugrename, [], _('debugrename FILE [REV]')),
2908 "debugwalk":
2913 "debugwalk":
2909 (debugwalk,
2914 (debugwalk,
2910 [('I', 'include', [], _('include names matching the given patterns')),
2915 [('I', 'include', [], _('include names matching the given patterns')),
2911 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2916 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2912 _('debugwalk [OPTION]... [FILE]...')),
2917 _('debugwalk [OPTION]... [FILE]...')),
2913 "^diff":
2918 "^diff":
2914 (diff,
2919 (diff,
2915 [('r', 'rev', [], _('revision')),
2920 [('r', 'rev', [], _('revision')),
2916 ('a', 'text', None, _('treat all files as text')),
2921 ('a', 'text', None, _('treat all files as text')),
2917 ('p', 'show-function', None,
2922 ('p', 'show-function', None,
2918 _('show which function each change is in')),
2923 _('show which function each change is in')),
2919 ('w', 'ignore-all-space', None,
2924 ('w', 'ignore-all-space', None,
2920 _('ignore white space when comparing lines')),
2925 _('ignore white space when comparing lines')),
2921 ('b', 'ignore-space-change', None,
2926 ('b', 'ignore-space-change', None,
2922 _('ignore changes in the amount of white space')),
2927 _('ignore changes in the amount of white space')),
2923 ('B', 'ignore-blank-lines', None,
2928 ('B', 'ignore-blank-lines', None,
2924 _('ignore changes whose lines are all blank')),
2929 _('ignore changes whose lines are all blank')),
2925 ('I', 'include', [], _('include names matching the given patterns')),
2930 ('I', 'include', [], _('include names matching the given patterns')),
2926 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2931 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2927 _('hg diff [-a] [-I] [-X] [-r REV1 [-r REV2]] [FILE]...')),
2932 _('hg diff [-a] [-I] [-X] [-r REV1 [-r REV2]] [FILE]...')),
2928 "^export":
2933 "^export":
2929 (export,
2934 (export,
2930 [('o', 'output', '', _('print output to file with formatted name')),
2935 [('o', 'output', '', _('print output to file with formatted name')),
2931 ('a', 'text', None, _('treat all files as text')),
2936 ('a', 'text', None, _('treat all files as text')),
2932 ('', 'switch-parent', None, _('diff against the second parent'))],
2937 ('', 'switch-parent', None, _('diff against the second parent'))],
2933 _('hg export [-a] [-o OUTFILESPEC] REV...')),
2938 _('hg export [-a] [-o OUTFILESPEC] REV...')),
2934 "debugforget|forget":
2939 "debugforget|forget":
2935 (forget,
2940 (forget,
2936 [('I', 'include', [], _('include names matching the given patterns')),
2941 [('I', 'include', [], _('include names matching the given patterns')),
2937 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2942 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2938 _('hg forget [OPTION]... FILE...')),
2943 _('hg forget [OPTION]... FILE...')),
2939 "grep":
2944 "grep":
2940 (grep,
2945 (grep,
2941 [('0', 'print0', None, _('end fields with NUL')),
2946 [('0', 'print0', None, _('end fields with NUL')),
2942 ('', 'all', None, _('print all revisions that match')),
2947 ('', 'all', None, _('print all revisions that match')),
2943 ('i', 'ignore-case', None, _('ignore case when matching')),
2948 ('i', 'ignore-case', None, _('ignore case when matching')),
2944 ('l', 'files-with-matches', None,
2949 ('l', 'files-with-matches', None,
2945 _('print only filenames and revs that match')),
2950 _('print only filenames and revs that match')),
2946 ('n', 'line-number', None, _('print matching line numbers')),
2951 ('n', 'line-number', None, _('print matching line numbers')),
2947 ('r', 'rev', [], _('search in given revision range')),
2952 ('r', 'rev', [], _('search in given revision range')),
2948 ('u', 'user', None, _('print user who committed change')),
2953 ('u', 'user', None, _('print user who committed change')),
2949 ('I', 'include', [], _('include names matching the given patterns')),
2954 ('I', 'include', [], _('include names matching the given patterns')),
2950 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2955 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
2951 _('hg grep [OPTION]... PATTERN [FILE]...')),
2956 _('hg grep [OPTION]... PATTERN [FILE]...')),
2952 "heads":
2957 "heads":
2953 (heads,
2958 (heads,
2954 [('b', 'branches', None, _('show branches')),
2959 [('b', 'branches', None, _('show branches')),
2955 ('', 'style', '', _('display using template map file')),
2960 ('', 'style', '', _('display using template map file')),
2956 ('r', 'rev', '', _('show only heads which are descendants of rev')),
2961 ('r', 'rev', '', _('show only heads which are descendants of rev')),
2957 ('', 'template', '', _('display with template'))],
2962 ('', 'template', '', _('display with template'))],
2958 _('hg heads [-b] [-r <rev>]')),
2963 _('hg heads [-b] [-r <rev>]')),
2959 "help": (help_, [], _('hg help [COMMAND]')),
2964 "help": (help_, [], _('hg help [COMMAND]')),
2960 "identify|id": (identify, [], _('hg identify')),
2965 "identify|id": (identify, [], _('hg identify')),
2961 "import|patch":
2966 "import|patch":
2962 (import_,
2967 (import_,
2963 [('p', 'strip', 1,
2968 [('p', 'strip', 1,
2964 _('directory strip option for patch. This has the same\n'
2969 _('directory strip option for patch. This has the same\n'
2965 'meaning as the corresponding patch option')),
2970 'meaning as the corresponding patch option')),
2966 ('m', 'message', '', _('use <text> as commit message')),
2971 ('m', 'message', '', _('use <text> as commit message')),
2967 ('b', 'base', '', _('base path')),
2972 ('b', 'base', '', _('base path')),
2968 ('f', 'force', None,
2973 ('f', 'force', None,
2969 _('skip check for outstanding uncommitted changes'))],
2974 _('skip check for outstanding uncommitted changes'))],
2970 _('hg import [-p NUM] [-b BASE] [-m MESSAGE] [-f] PATCH...')),
2975 _('hg import [-p NUM] [-b BASE] [-m MESSAGE] [-f] PATCH...')),
2971 "incoming|in": (incoming,
2976 "incoming|in": (incoming,
2972 [('M', 'no-merges', None, _('do not show merges')),
2977 [('M', 'no-merges', None, _('do not show merges')),
2973 ('f', 'force', None,
2978 ('f', 'force', None,
2974 _('run even when remote repository is unrelated')),
2979 _('run even when remote repository is unrelated')),
2975 ('', 'style', '', _('display using template map file')),
2980 ('', 'style', '', _('display using template map file')),
2976 ('n', 'newest-first', None, _('show newest record first')),
2981 ('n', 'newest-first', None, _('show newest record first')),
2977 ('', 'bundle', '', _('file to store the bundles into')),
2982 ('', 'bundle', '', _('file to store the bundles into')),
2978 ('p', 'patch', None, _('show patch')),
2983 ('p', 'patch', None, _('show patch')),
2979 ('r', 'rev', [], _('a specific revision you would like to pull')),
2984 ('r', 'rev', [], _('a specific revision you would like to pull')),
2980 ('', 'template', '', _('display with template')),
2985 ('', 'template', '', _('display with template')),
2981 ('e', 'ssh', '', _('specify ssh command to use')),
2986 ('e', 'ssh', '', _('specify ssh command to use')),
2982 ('', 'remotecmd', '',
2987 ('', 'remotecmd', '',
2983 _('specify hg command to run on the remote side'))],
2988 _('specify hg command to run on the remote side'))],
2984 _('hg incoming [-p] [-n] [-M] [-r REV]...'
2989 _('hg incoming [-p] [-n] [-M] [-r REV]...'
2985 ' [--bundle FILENAME] [SOURCE]')),
2990 ' [--bundle FILENAME] [SOURCE]')),
2986 "^init":
2991 "^init":
2987 (init,
2992 (init,
2988 [('e', 'ssh', '', _('specify ssh command to use')),
2993 [('e', 'ssh', '', _('specify ssh command to use')),
2989 ('', 'remotecmd', '',
2994 ('', 'remotecmd', '',
2990 _('specify hg command to run on the remote side'))],
2995 _('specify hg command to run on the remote side'))],
2991 _('hg init [-e FILE] [--remotecmd FILE] [DEST]')),
2996 _('hg init [-e FILE] [--remotecmd FILE] [DEST]')),
2992 "locate":
2997 "locate":
2993 (locate,
2998 (locate,
2994 [('r', 'rev', '', _('search the repository as it stood at rev')),
2999 [('r', 'rev', '', _('search the repository as it stood at rev')),
2995 ('0', 'print0', None,
3000 ('0', 'print0', None,
2996 _('end filenames with NUL, for use with xargs')),
3001 _('end filenames with NUL, for use with xargs')),
2997 ('f', 'fullpath', None,
3002 ('f', 'fullpath', None,
2998 _('print complete paths from the filesystem root')),
3003 _('print complete paths from the filesystem root')),
2999 ('I', 'include', [], _('include names matching the given patterns')),
3004 ('I', 'include', [], _('include names matching the given patterns')),
3000 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3005 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3001 _('hg locate [OPTION]... [PATTERN]...')),
3006 _('hg locate [OPTION]... [PATTERN]...')),
3002 "^log|history":
3007 "^log|history":
3003 (log,
3008 (log,
3004 [('b', 'branches', None, _('show branches')),
3009 [('b', 'branches', None, _('show branches')),
3005 ('k', 'keyword', [], _('search for a keyword')),
3010 ('k', 'keyword', [], _('search for a keyword')),
3006 ('l', 'limit', '', _('limit number of changes displayed')),
3011 ('l', 'limit', '', _('limit number of changes displayed')),
3007 ('r', 'rev', [], _('show the specified revision or range')),
3012 ('r', 'rev', [], _('show the specified revision or range')),
3008 ('M', 'no-merges', None, _('do not show merges')),
3013 ('M', 'no-merges', None, _('do not show merges')),
3009 ('', 'style', '', _('display using template map file')),
3014 ('', 'style', '', _('display using template map file')),
3010 ('m', 'only-merges', None, _('show only merges')),
3015 ('m', 'only-merges', None, _('show only merges')),
3011 ('p', 'patch', None, _('show patch')),
3016 ('p', 'patch', None, _('show patch')),
3012 ('', 'template', '', _('display with template')),
3017 ('', 'template', '', _('display with template')),
3013 ('I', 'include', [], _('include names matching the given patterns')),
3018 ('I', 'include', [], _('include names matching the given patterns')),
3014 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3019 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3015 _('hg log [OPTION]... [FILE]')),
3020 _('hg log [OPTION]... [FILE]')),
3016 "manifest": (manifest, [], _('hg manifest [REV]')),
3021 "manifest": (manifest, [], _('hg manifest [REV]')),
3017 "merge":
3022 "merge":
3018 (merge,
3023 (merge,
3019 [('b', 'branch', '', _('merge with head of a specific branch')),
3024 [('b', 'branch', '', _('merge with head of a specific branch')),
3020 ('f', 'force', None, _('force a merge with outstanding changes'))],
3025 ('f', 'force', None, _('force a merge with outstanding changes'))],
3021 _('hg merge [-b TAG] [-f] [REV]')),
3026 _('hg merge [-b TAG] [-f] [REV]')),
3022 "outgoing|out": (outgoing,
3027 "outgoing|out": (outgoing,
3023 [('M', 'no-merges', None, _('do not show merges')),
3028 [('M', 'no-merges', None, _('do not show merges')),
3024 ('f', 'force', None,
3029 ('f', 'force', None,
3025 _('run even when remote repository is unrelated')),
3030 _('run even when remote repository is unrelated')),
3026 ('p', 'patch', None, _('show patch')),
3031 ('p', 'patch', None, _('show patch')),
3027 ('', 'style', '', _('display using template map file')),
3032 ('', 'style', '', _('display using template map file')),
3028 ('r', 'rev', [], _('a specific revision you would like to push')),
3033 ('r', 'rev', [], _('a specific revision you would like to push')),
3029 ('n', 'newest-first', None, _('show newest record first')),
3034 ('n', 'newest-first', None, _('show newest record first')),
3030 ('', 'template', '', _('display with template')),
3035 ('', 'template', '', _('display with template')),
3031 ('e', 'ssh', '', _('specify ssh command to use')),
3036 ('e', 'ssh', '', _('specify ssh command to use')),
3032 ('', 'remotecmd', '',
3037 ('', 'remotecmd', '',
3033 _('specify hg command to run on the remote side'))],
3038 _('specify hg command to run on the remote side'))],
3034 _('hg outgoing [-M] [-p] [-n] [-r REV]... [DEST]')),
3039 _('hg outgoing [-M] [-p] [-n] [-r REV]... [DEST]')),
3035 "^parents":
3040 "^parents":
3036 (parents,
3041 (parents,
3037 [('b', 'branches', None, _('show branches')),
3042 [('b', 'branches', None, _('show branches')),
3038 ('', 'style', '', _('display using template map file')),
3043 ('', 'style', '', _('display using template map file')),
3039 ('', 'template', '', _('display with template'))],
3044 ('', 'template', '', _('display with template'))],
3040 _('hg parents [-b] [REV]')),
3045 _('hg parents [-b] [REV]')),
3041 "paths": (paths, [], _('hg paths [NAME]')),
3046 "paths": (paths, [], _('hg paths [NAME]')),
3042 "^pull":
3047 "^pull":
3043 (pull,
3048 (pull,
3044 [('u', 'update', None,
3049 [('u', 'update', None,
3045 _('update the working directory to tip after pull')),
3050 _('update the working directory to tip after pull')),
3046 ('e', 'ssh', '', _('specify ssh command to use')),
3051 ('e', 'ssh', '', _('specify ssh command to use')),
3047 ('f', 'force', None,
3052 ('f', 'force', None,
3048 _('run even when remote repository is unrelated')),
3053 _('run even when remote repository is unrelated')),
3049 ('r', 'rev', [], _('a specific revision you would like to pull')),
3054 ('r', 'rev', [], _('a specific revision you would like to pull')),
3050 ('', 'remotecmd', '',
3055 ('', 'remotecmd', '',
3051 _('specify hg command to run on the remote side'))],
3056 _('specify hg command to run on the remote side'))],
3052 _('hg pull [-u] [-r REV]... [-e FILE] [--remotecmd FILE] [SOURCE]')),
3057 _('hg pull [-u] [-r REV]... [-e FILE] [--remotecmd FILE] [SOURCE]')),
3053 "^push":
3058 "^push":
3054 (push,
3059 (push,
3055 [('f', 'force', None, _('force push')),
3060 [('f', 'force', None, _('force push')),
3056 ('e', 'ssh', '', _('specify ssh command to use')),
3061 ('e', 'ssh', '', _('specify ssh command to use')),
3057 ('r', 'rev', [], _('a specific revision you would like to push')),
3062 ('r', 'rev', [], _('a specific revision you would like to push')),
3058 ('', 'remotecmd', '',
3063 ('', 'remotecmd', '',
3059 _('specify hg command to run on the remote side'))],
3064 _('specify hg command to run on the remote side'))],
3060 _('hg push [-f] [-r REV]... [-e FILE] [--remotecmd FILE] [DEST]')),
3065 _('hg push [-f] [-r REV]... [-e FILE] [--remotecmd FILE] [DEST]')),
3061 "debugrawcommit|rawcommit":
3066 "debugrawcommit|rawcommit":
3062 (rawcommit,
3067 (rawcommit,
3063 [('p', 'parent', [], _('parent')),
3068 [('p', 'parent', [], _('parent')),
3064 ('d', 'date', '', _('date code')),
3069 ('d', 'date', '', _('date code')),
3065 ('u', 'user', '', _('user')),
3070 ('u', 'user', '', _('user')),
3066 ('F', 'files', '', _('file list')),
3071 ('F', 'files', '', _('file list')),
3067 ('m', 'message', '', _('commit message')),
3072 ('m', 'message', '', _('commit message')),
3068 ('l', 'logfile', '', _('commit message file'))],
3073 ('l', 'logfile', '', _('commit message file'))],
3069 _('hg debugrawcommit [OPTION]... [FILE]...')),
3074 _('hg debugrawcommit [OPTION]... [FILE]...')),
3070 "recover": (recover, [], _('hg recover')),
3075 "recover": (recover, [], _('hg recover')),
3071 "^remove|rm":
3076 "^remove|rm":
3072 (remove,
3077 (remove,
3073 [('A', 'after', None, _('record remove that has already occurred')),
3078 [('A', 'after', None, _('record remove that has already occurred')),
3074 ('f', 'force', None, _('remove file even if modified')),
3079 ('f', 'force', None, _('remove file even if modified')),
3075 ('I', 'include', [], _('include names matching the given patterns')),
3080 ('I', 'include', [], _('include names matching the given patterns')),
3076 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3081 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3077 _('hg remove [OPTION]... FILE...')),
3082 _('hg remove [OPTION]... FILE...')),
3078 "rename|mv":
3083 "rename|mv":
3079 (rename,
3084 (rename,
3080 [('A', 'after', None, _('record a rename that has already occurred')),
3085 [('A', 'after', None, _('record a rename that has already occurred')),
3081 ('f', 'force', None,
3086 ('f', 'force', None,
3082 _('forcibly copy over an existing managed file')),
3087 _('forcibly copy over an existing managed file')),
3083 ('I', 'include', [], _('include names matching the given patterns')),
3088 ('I', 'include', [], _('include names matching the given patterns')),
3084 ('X', 'exclude', [], _('exclude names matching the given patterns')),
3089 ('X', 'exclude', [], _('exclude names matching the given patterns')),
3085 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3090 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3086 _('hg rename [OPTION]... SOURCE... DEST')),
3091 _('hg rename [OPTION]... SOURCE... DEST')),
3087 "^revert":
3092 "^revert":
3088 (revert,
3093 (revert,
3089 [('r', 'rev', '', _('revision to revert to')),
3094 [('r', 'rev', '', _('revision to revert to')),
3090 ('', 'no-backup', None, _('do not save backup copies of files')),
3095 ('', 'no-backup', None, _('do not save backup copies of files')),
3091 ('I', 'include', [], _('include names matching given patterns')),
3096 ('I', 'include', [], _('include names matching given patterns')),
3092 ('X', 'exclude', [], _('exclude names matching given patterns')),
3097 ('X', 'exclude', [], _('exclude names matching given patterns')),
3093 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3098 ('n', 'dry-run', None, _('do not perform actions, just print output'))],
3094 _('hg revert [-r REV] [NAME]...')),
3099 _('hg revert [-r REV] [NAME]...')),
3095 "rollback": (rollback, [], _('hg rollback')),
3100 "rollback": (rollback, [], _('hg rollback')),
3096 "root": (root, [], _('hg root')),
3101 "root": (root, [], _('hg root')),
3097 "^serve":
3102 "^serve":
3098 (serve,
3103 (serve,
3099 [('A', 'accesslog', '', _('name of access log file to write to')),
3104 [('A', 'accesslog', '', _('name of access log file to write to')),
3100 ('d', 'daemon', None, _('run server in background')),
3105 ('d', 'daemon', None, _('run server in background')),
3101 ('', 'daemon-pipefds', '', _('used internally by daemon mode')),
3106 ('', 'daemon-pipefds', '', _('used internally by daemon mode')),
3102 ('E', 'errorlog', '', _('name of error log file to write to')),
3107 ('E', 'errorlog', '', _('name of error log file to write to')),
3103 ('p', 'port', 0, _('port to use (default: 8000)')),
3108 ('p', 'port', 0, _('port to use (default: 8000)')),
3104 ('a', 'address', '', _('address to use')),
3109 ('a', 'address', '', _('address to use')),
3105 ('n', 'name', '',
3110 ('n', 'name', '',
3106 _('name to show in web pages (default: working dir)')),
3111 _('name to show in web pages (default: working dir)')),
3107 ('', 'webdir-conf', '', _('name of the webdir config file'
3112 ('', 'webdir-conf', '', _('name of the webdir config file'
3108 ' (serve more than one repo)')),
3113 ' (serve more than one repo)')),
3109 ('', 'pid-file', '', _('name of file to write process ID to')),
3114 ('', 'pid-file', '', _('name of file to write process ID to')),
3110 ('', 'stdio', None, _('for remote clients')),
3115 ('', 'stdio', None, _('for remote clients')),
3111 ('t', 'templates', '', _('web templates to use')),
3116 ('t', 'templates', '', _('web templates to use')),
3112 ('', 'style', '', _('template style to use')),
3117 ('', 'style', '', _('template style to use')),
3113 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4'))],
3118 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4'))],
3114 _('hg serve [OPTION]...')),
3119 _('hg serve [OPTION]...')),
3115 "^status|st":
3120 "^status|st":
3116 (status,
3121 (status,
3117 [('m', 'modified', None, _('show only modified files')),
3122 [('m', 'modified', None, _('show only modified files')),
3118 ('a', 'added', None, _('show only added files')),
3123 ('a', 'added', None, _('show only added files')),
3119 ('r', 'removed', None, _('show only removed files')),
3124 ('r', 'removed', None, _('show only removed files')),
3120 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
3125 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
3121 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
3126 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
3122 ('i', 'ignored', None, _('show ignored files')),
3127 ('i', 'ignored', None, _('show ignored files')),
3123 ('n', 'no-status', None, _('hide status prefix')),
3128 ('n', 'no-status', None, _('hide status prefix')),
3124 ('0', 'print0', None,
3129 ('0', 'print0', None,
3125 _('end filenames with NUL, for use with xargs')),
3130 _('end filenames with NUL, for use with xargs')),
3126 ('I', 'include', [], _('include names matching the given patterns')),
3131 ('I', 'include', [], _('include names matching the given patterns')),
3127 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3132 ('X', 'exclude', [], _('exclude names matching the given patterns'))],
3128 _('hg status [OPTION]... [FILE]...')),
3133 _('hg status [OPTION]... [FILE]...')),
3129 "tag":
3134 "tag":
3130 (tag,
3135 (tag,
3131 [('l', 'local', None, _('make the tag local')),
3136 [('l', 'local', None, _('make the tag local')),
3132 ('m', 'message', '', _('message for tag commit log entry')),
3137 ('m', 'message', '', _('message for tag commit log entry')),
3133 ('d', 'date', '', _('record datecode as commit date')),
3138 ('d', 'date', '', _('record datecode as commit date')),
3134 ('u', 'user', '', _('record user as committer')),
3139 ('u', 'user', '', _('record user as committer')),
3135 ('r', 'rev', '', _('revision to tag'))],
3140 ('r', 'rev', '', _('revision to tag'))],
3136 _('hg tag [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME')),
3141 _('hg tag [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME')),
3137 "tags": (tags, [], _('hg tags')),
3142 "tags": (tags, [], _('hg tags')),
3138 "tip":
3143 "tip":
3139 (tip,
3144 (tip,
3140 [('b', 'branches', None, _('show branches')),
3145 [('b', 'branches', None, _('show branches')),
3141 ('', 'style', '', _('display using template map file')),
3146 ('', 'style', '', _('display using template map file')),
3142 ('p', 'patch', None, _('show patch')),
3147 ('p', 'patch', None, _('show patch')),
3143 ('', 'template', '', _('display with template'))],
3148 ('', 'template', '', _('display with template'))],
3144 _('hg tip [-b] [-p]')),
3149 _('hg tip [-b] [-p]')),
3145 "unbundle":
3150 "unbundle":
3146 (unbundle,
3151 (unbundle,
3147 [('u', 'update', None,
3152 [('u', 'update', None,
3148 _('update the working directory to tip after unbundle'))],
3153 _('update the working directory to tip after unbundle'))],
3149 _('hg unbundle [-u] FILE')),
3154 _('hg unbundle [-u] FILE')),
3150 "debugundo|undo": (undo, [], _('hg undo')),
3155 "debugundo|undo": (undo, [], _('hg undo')),
3151 "^update|up|checkout|co":
3156 "^update|up|checkout|co":
3152 (update,
3157 (update,
3153 [('b', 'branch', '', _('checkout the head of a specific branch')),
3158 [('b', 'branch', '', _('checkout the head of a specific branch')),
3154 ('m', 'merge', None, _('allow merging of branches (DEPRECATED)')),
3159 ('m', 'merge', None, _('allow merging of branches (DEPRECATED)')),
3155 ('C', 'clean', None, _('overwrite locally modified files')),
3160 ('C', 'clean', None, _('overwrite locally modified files')),
3156 ('f', 'force', None, _('force a merge with outstanding changes'))],
3161 ('f', 'force', None, _('force a merge with outstanding changes'))],
3157 _('hg update [-b TAG] [-m] [-C] [-f] [REV]')),
3162 _('hg update [-b TAG] [-m] [-C] [-f] [REV]')),
3158 "verify": (verify, [], _('hg verify')),
3163 "verify": (verify, [], _('hg verify')),
3159 "version": (show_version, [], _('hg version')),
3164 "version": (show_version, [], _('hg version')),
3160 }
3165 }
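Each key in the table above is a command name, optionally prefixed with "^" to place it on the short help list and followed by "|"-separated aliases; each value is a (function, options, synopsis) tuple, and every option is itself a (short flag, long name, default, help text) tuple whose default also indicates its kind (None for simple switches, '' for string values, [] for options that may be repeated). A minimal sketch of an entry of that shape, using a hypothetical hello command that is not part of the source:

# hypothetical entry, shown only to illustrate the table layout
def hello(ui, repo, **opts):
    ui.write("hello from %s\n" % repo.root)

table["hello"] = (hello,
                  [('g', 'greeting', '', _('greeting text to use'))],
                  _('hg hello [-g TEXT]'))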
3161
3166
3162 globalopts = [
3167 globalopts = [
3163 ('R', 'repository', '',
3168 ('R', 'repository', '',
3164 _('repository root directory or symbolic path name')),
3169 _('repository root directory or symbolic path name')),
3165 ('', 'cwd', '', _('change working directory')),
3170 ('', 'cwd', '', _('change working directory')),
3166 ('y', 'noninteractive', None,
3171 ('y', 'noninteractive', None,
3167 _('do not prompt, assume \'yes\' for any required answers')),
3172 _('do not prompt, assume \'yes\' for any required answers')),
3168 ('q', 'quiet', None, _('suppress output')),
3173 ('q', 'quiet', None, _('suppress output')),
3169 ('v', 'verbose', None, _('enable additional output')),
3174 ('v', 'verbose', None, _('enable additional output')),
3170 ('', 'config', [], _('set/override config option')),
3175 ('', 'config', [], _('set/override config option')),
3171 ('', 'debug', None, _('enable debugging output')),
3176 ('', 'debug', None, _('enable debugging output')),
3172 ('', 'debugger', None, _('start debugger')),
3177 ('', 'debugger', None, _('start debugger')),
3173 ('', 'lsprof', None, _('print improved command execution profile')),
3178 ('', 'lsprof', None, _('print improved command execution profile')),
3174 ('', 'traceback', None, _('print traceback on exception')),
3179 ('', 'traceback', None, _('print traceback on exception')),
3175 ('', 'time', None, _('time how long the command takes')),
3180 ('', 'time', None, _('time how long the command takes')),
3176 ('', 'profile', None, _('print command execution profile')),
3181 ('', 'profile', None, _('print command execution profile')),
3177 ('', 'version', None, _('output version information and exit')),
3182 ('', 'version', None, _('output version information and exit')),
3178 ('h', 'help', None, _('display help and exit')),
3183 ('h', 'help', None, _('display help and exit')),
3179 ]
3184 ]
3180
3185
3181 norepo = ("clone init version help debugancestor debugcomplete debugdata"
3186 norepo = ("clone init version help debugancestor debugcomplete debugdata"
3182 " debugindex debugindexdot")
3187 " debugindex debugindexdot")
3183 optionalrepo = ("paths serve debugconfig")
3188 optionalrepo = ("paths serve debugconfig")
3184
3189
3185 def findpossible(cmd):
3190 def findpossible(cmd):
3186 """
3191 """
3187 Return cmd -> (aliases, command table entry)
3192 Return cmd -> (aliases, command table entry)
3188 for each matching command.
3193 for each matching command.
3189 Return debug commands (or their aliases) only if no normal command matches.
3194 Return debug commands (or their aliases) only if no normal command matches.
3190 """
3195 """
3191 choice = {}
3196 choice = {}
3192 debugchoice = {}
3197 debugchoice = {}
3193 for e in table.keys():
3198 for e in table.keys():
3194 aliases = e.lstrip("^").split("|")
3199 aliases = e.lstrip("^").split("|")
3195 found = None
3200 found = None
3196 if cmd in aliases:
3201 if cmd in aliases:
3197 found = cmd
3202 found = cmd
3198 else:
3203 else:
3199 for a in aliases:
3204 for a in aliases:
3200 if a.startswith(cmd):
3205 if a.startswith(cmd):
3201 found = a
3206 found = a
3202 break
3207 break
3203 if found is not None:
3208 if found is not None:
3204 if aliases[0].startswith("debug"):
3209 if aliases[0].startswith("debug"):
3205 debugchoice[found] = (aliases, table[e])
3210 debugchoice[found] = (aliases, table[e])
3206 else:
3211 else:
3207 choice[found] = (aliases, table[e])
3212 choice[found] = (aliases, table[e])
3208
3213
3209 if not choice and debugchoice:
3214 if not choice and debugchoice:
3210 choice = debugchoice
3215 choice = debugchoice
3211
3216
3212 return choice
3217 return choice
3213
3218
3214 def findcmd(cmd):
3219 def findcmd(cmd):
3215 """Return (aliases, command table entry) for command string."""
3220 """Return (aliases, command table entry) for command string."""
3216 choice = findpossible(cmd)
3221 choice = findpossible(cmd)
3217
3222
3218 if choice.has_key(cmd):
3223 if choice.has_key(cmd):
3219 return choice[cmd]
3224 return choice[cmd]
3220
3225
3221 if len(choice) > 1:
3226 if len(choice) > 1:
3222 clist = choice.keys()
3227 clist = choice.keys()
3223 clist.sort()
3228 clist.sort()
3224 raise AmbiguousCommand(cmd, clist)
3229 raise AmbiguousCommand(cmd, clist)
3225
3230
3226 if choice:
3231 if choice:
3227 return choice.values()[0]
3232 return choice.values()[0]
3228
3233
3229 raise UnknownCommand(cmd)
3234 raise UnknownCommand(cmd)
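findpossible and findcmd implement the unambiguous-prefix matching used for command lookup: an exact alias wins outright, a unique prefix resolves to its command, a prefix shared by several commands raises AmbiguousCommand, and anything else raises UnknownCommand; debug commands are only considered when no normal command matches. A rough illustration, assuming this module is importable as mercurial.commands:

from mercurial import commands

commands.findcmd("st")          # exact alias of status -> (aliases, entry)
commands.findcmd("stat")        # unique prefix of status -> same entry
commands.findcmd("debugco")     # prefix of both debugcomplete and debugconfig,
                                # so this raises AmbiguousCommand
commands.findcmd("frobnicate")  # no match -> raises UnknownCommand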
3230
3235
3231 def catchterm(*args):
3236 def catchterm(*args):
3232 raise util.SignalInterrupt
3237 raise util.SignalInterrupt
3233
3238
3234 def run():
3239 def run():
3235 sys.exit(dispatch(sys.argv[1:]))
3240 sys.exit(dispatch(sys.argv[1:]))
3236
3241
3237 class ParseError(Exception):
3242 class ParseError(Exception):
3238 """Exception raised on errors in parsing the command line."""
3243 """Exception raised on errors in parsing the command line."""
3239
3244
3240 def parse(ui, args):
3245 def parse(ui, args):
3241 options = {}
3246 options = {}
3242 cmdoptions = {}
3247 cmdoptions = {}
3243
3248
3244 try:
3249 try:
3245 args = fancyopts.fancyopts(args, globalopts, options)
3250 args = fancyopts.fancyopts(args, globalopts, options)
3246 except fancyopts.getopt.GetoptError, inst:
3251 except fancyopts.getopt.GetoptError, inst:
3247 raise ParseError(None, inst)
3252 raise ParseError(None, inst)
3248
3253
3249 if args:
3254 if args:
3250 cmd, args = args[0], args[1:]
3255 cmd, args = args[0], args[1:]
3251 aliases, i = findcmd(cmd)
3256 aliases, i = findcmd(cmd)
3252 cmd = aliases[0]
3257 cmd = aliases[0]
3253 defaults = ui.config("defaults", cmd)
3258 defaults = ui.config("defaults", cmd)
3254 if defaults:
3259 if defaults:
3255 args = defaults.split() + args
3260 args = defaults.split() + args
3256 c = list(i[1])
3261 c = list(i[1])
3257 else:
3262 else:
3258 cmd = None
3263 cmd = None
3259 c = []
3264 c = []
3260
3265
3261 # combine global options into local
3266 # combine global options into local
3262 for o in globalopts:
3267 for o in globalopts:
3263 c.append((o[0], o[1], options[o[1]], o[3]))
3268 c.append((o[0], o[1], options[o[1]], o[3]))
3264
3269
3265 try:
3270 try:
3266 args = fancyopts.fancyopts(args, c, cmdoptions)
3271 args = fancyopts.fancyopts(args, c, cmdoptions)
3267 except fancyopts.getopt.GetoptError, inst:
3272 except fancyopts.getopt.GetoptError, inst:
3268 raise ParseError(cmd, inst)
3273 raise ParseError(cmd, inst)
3269
3274
3270 # separate global options back out
3275 # separate global options back out
3271 for o in globalopts:
3276 for o in globalopts:
3272 n = o[1]
3277 n = o[1]
3273 options[n] = cmdoptions[n]
3278 options[n] = cmdoptions[n]
3274 del cmdoptions[n]
3279 del cmdoptions[n]
3275
3280
3276 return (cmd, cmd and i[0] or None, args, options, cmdoptions)
3281 return (cmd, cmd and i[0] or None, args, options, cmdoptions)
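Note the defaults lookup above: before the command's own options are parsed, any words stored under the command's name in the defaults section of the configuration are prepended to the arguments, so they behave exactly as if they had been typed first on the command line. A small sketch of the effect, with a made-up configuration value:

# hypothetical: the user's hgrc carries 'log = -v -l 5' in its [defaults] section
defaults = u.config("defaults", "log")        # -> "-v -l 5"
args = defaults.split() + ["foo.c"]           # -> ['-v', '-l', '5', 'foo.c']
# 'hg log foo.c' is then parsed as if the user had typed 'hg log -v -l 5 foo.c'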
3277
3282
3278 external = {}
3283 external = {}
3279
3284
3280 def findext(name):
3285 def findext(name):
3281 '''return module with given extension name'''
3286 '''return module with given extension name'''
3282 try:
3287 try:
3283 return sys.modules[external[name]]
3288 return sys.modules[external[name]]
3284 except KeyError:
3289 except KeyError:
3285 dotname = '.' + name
3290 dotname = '.' + name
3286 for k, v in external.iteritems():
3291 for k, v in external.iteritems():
3287 if k.endswith('.' + name) or v == name:
3292 if k.endswith('.' + name) or v == name:
3288 return sys.modules[v]
3293 return sys.modules[v]
3289 raise KeyError(name)
3294 raise KeyError(name)
3290
3295
3291 def dispatch(args):
3296 def dispatch(args):
3292 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
3297 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
3293 num = getattr(signal, name, None)
3298 num = getattr(signal, name, None)
3294 if num: signal.signal(num, catchterm)
3299 if num: signal.signal(num, catchterm)
3295
3300
3296 try:
3301 try:
3297 u = ui.ui(traceback='--traceback' in sys.argv[1:])
3302 u = ui.ui(traceback='--traceback' in sys.argv[1:])
3298 except util.Abort, inst:
3303 except util.Abort, inst:
3299 sys.stderr.write(_("abort: %s\n") % inst)
3304 sys.stderr.write(_("abort: %s\n") % inst)
3300 return -1
3305 return -1
3301
3306
3302 for ext_name, load_from_name in u.extensions():
3307 for ext_name, load_from_name in u.extensions():
3303 try:
3308 try:
3304 if load_from_name:
3309 if load_from_name:
3305 # the module will be loaded in sys.modules
3310 # the module will be loaded in sys.modules
3306 # choose a unique name so that it doesn't
3311 # choose a unique name so that it doesn't
3307 # conflict with other modules
3312 # conflict with other modules
3308 module_name = "hgext_%s" % ext_name.replace('.', '_')
3313 module_name = "hgext_%s" % ext_name.replace('.', '_')
3309 mod = imp.load_source(module_name, load_from_name)
3314 mod = imp.load_source(module_name, load_from_name)
3310 else:
3315 else:
3311 def importh(name):
3316 def importh(name):
3312 mod = __import__(name)
3317 mod = __import__(name)
3313 components = name.split('.')
3318 components = name.split('.')
3314 for comp in components[1:]:
3319 for comp in components[1:]:
3315 mod = getattr(mod, comp)
3320 mod = getattr(mod, comp)
3316 return mod
3321 return mod
3317 try:
3322 try:
3318 mod = importh("hgext.%s" % ext_name)
3323 mod = importh("hgext.%s" % ext_name)
3319 except ImportError:
3324 except ImportError:
3320 mod = importh(ext_name)
3325 mod = importh(ext_name)
3321 external[ext_name] = mod.__name__
3326 external[ext_name] = mod.__name__
3322 except (util.SignalInterrupt, KeyboardInterrupt):
3327 except (util.SignalInterrupt, KeyboardInterrupt):
3323 raise
3328 raise
3324 except Exception, inst:
3329 except Exception, inst:
3325 u.warn(_("*** failed to import extension %s: %s\n") % (x[0], inst))
3330 u.warn(_("*** failed to import extension %s: %s\n") % (ext_name, inst))
3326 if u.print_exc():
3331 if u.print_exc():
3327 return 1
3332 return 1
3328
3333
3329 for name in external.itervalues():
3334 for name in external.itervalues():
3330 mod = sys.modules[name]
3335 mod = sys.modules[name]
3331 uisetup = getattr(mod, 'uisetup', None)
3336 uisetup = getattr(mod, 'uisetup', None)
3332 if uisetup:
3337 if uisetup:
3333 uisetup(u)
3338 uisetup(u)
3334 cmdtable = getattr(mod, 'cmdtable', {})
3339 cmdtable = getattr(mod, 'cmdtable', {})
3335 for t in cmdtable:
3340 for t in cmdtable:
3336 if t in table:
3341 if t in table:
3337 u.warn(_("module %s overrides %s\n") % (name, t))
3342 u.warn(_("module %s overrides %s\n") % (name, t))
3338 table.update(cmdtable)
3343 table.update(cmdtable)
3339
3344
3340 try:
3345 try:
3341 cmd, func, args, options, cmdoptions = parse(u, args)
3346 cmd, func, args, options, cmdoptions = parse(u, args)
3342 if options["time"]:
3347 if options["time"]:
3343 def get_times():
3348 def get_times():
3344 t = os.times()
3349 t = os.times()
3345 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
3350 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
3346 t = (t[0], t[1], t[2], t[3], time.clock())
3351 t = (t[0], t[1], t[2], t[3], time.clock())
3347 return t
3352 return t
3348 s = get_times()
3353 s = get_times()
3349 def print_time():
3354 def print_time():
3350 t = get_times()
3355 t = get_times()
3351 u.warn(_("Time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
3356 u.warn(_("Time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
3352 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
3357 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
3353 atexit.register(print_time)
3358 atexit.register(print_time)
3354
3359
3355 u.updateopts(options["verbose"], options["debug"], options["quiet"],
3360 u.updateopts(options["verbose"], options["debug"], options["quiet"],
3356 not options["noninteractive"], options["traceback"],
3361 not options["noninteractive"], options["traceback"],
3357 options["config"])
3362 options["config"])
3358
3363
3359 # enter the debugger before command execution
3364 # enter the debugger before command execution
3360 if options['debugger']:
3365 if options['debugger']:
3361 pdb.set_trace()
3366 pdb.set_trace()
3362
3367
3363 try:
3368 try:
3364 if options['cwd']:
3369 if options['cwd']:
3365 try:
3370 try:
3366 os.chdir(options['cwd'])
3371 os.chdir(options['cwd'])
3367 except OSError, inst:
3372 except OSError, inst:
3368 raise util.Abort('%s: %s' %
3373 raise util.Abort('%s: %s' %
3369 (options['cwd'], inst.strerror))
3374 (options['cwd'], inst.strerror))
3370
3375
3371 path = u.expandpath(options["repository"]) or ""
3376 path = u.expandpath(options["repository"]) or ""
3372 repo = path and hg.repository(u, path=path) or None
3377 repo = path and hg.repository(u, path=path) or None
3373
3378
3374 if options['help']:
3379 if options['help']:
3375 return help_(u, cmd, options['version'])
3380 return help_(u, cmd, options['version'])
3376 elif options['version']:
3381 elif options['version']:
3377 return show_version(u)
3382 return show_version(u)
3378 elif not cmd:
3383 elif not cmd:
3379 return help_(u, 'shortlist')
3384 return help_(u, 'shortlist')
3380
3385
3381 if cmd not in norepo.split():
3386 if cmd not in norepo.split():
3382 try:
3387 try:
3383 if not repo:
3388 if not repo:
3384 repo = hg.repository(u, path=path)
3389 repo = hg.repository(u, path=path)
3385 u = repo.ui
3390 u = repo.ui
3386 for name in external.itervalues():
3391 for name in external.itervalues():
3387 mod = sys.modules[name]
3392 mod = sys.modules[name]
3388 if hasattr(mod, 'reposetup'):
3393 if hasattr(mod, 'reposetup'):
3389 mod.reposetup(u, repo)
3394 mod.reposetup(u, repo)
3390 except hg.RepoError:
3395 except hg.RepoError:
3391 if cmd not in optionalrepo.split():
3396 if cmd not in optionalrepo.split():
3392 raise
3397 raise
3393 d = lambda: func(u, repo, *args, **cmdoptions)
3398 d = lambda: func(u, repo, *args, **cmdoptions)
3394 else:
3399 else:
3395 d = lambda: func(u, *args, **cmdoptions)
3400 d = lambda: func(u, *args, **cmdoptions)
3396
3401
3397 try:
3402 try:
3398 if options['profile']:
3403 if options['profile']:
3399 import hotshot, hotshot.stats
3404 import hotshot, hotshot.stats
3400 prof = hotshot.Profile("hg.prof")
3405 prof = hotshot.Profile("hg.prof")
3401 try:
3406 try:
3402 try:
3407 try:
3403 return prof.runcall(d)
3408 return prof.runcall(d)
3404 except:
3409 except:
3405 try:
3410 try:
3406 u.warn(_('exception raised - generating '
3411 u.warn(_('exception raised - generating '
3407 'profile anyway\n'))
3412 'profile anyway\n'))
3408 except:
3413 except:
3409 pass
3414 pass
3410 raise
3415 raise
3411 finally:
3416 finally:
3412 prof.close()
3417 prof.close()
3413 stats = hotshot.stats.load("hg.prof")
3418 stats = hotshot.stats.load("hg.prof")
3414 stats.strip_dirs()
3419 stats.strip_dirs()
3415 stats.sort_stats('time', 'calls')
3420 stats.sort_stats('time', 'calls')
3416 stats.print_stats(40)
3421 stats.print_stats(40)
3417 elif options['lsprof']:
3422 elif options['lsprof']:
3418 try:
3423 try:
3419 from mercurial import lsprof
3424 from mercurial import lsprof
3420 except ImportError:
3425 except ImportError:
3421 raise util.Abort(_(
3426 raise util.Abort(_(
3422 'lsprof not available - install from '
3427 'lsprof not available - install from '
3423 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
3428 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
3424 p = lsprof.Profiler()
3429 p = lsprof.Profiler()
3425 p.enable(subcalls=True)
3430 p.enable(subcalls=True)
3426 try:
3431 try:
3427 return d()
3432 return d()
3428 finally:
3433 finally:
3429 p.disable()
3434 p.disable()
3430 stats = lsprof.Stats(p.getstats())
3435 stats = lsprof.Stats(p.getstats())
3431 stats.sort()
3436 stats.sort()
3432 stats.pprint(top=10, file=sys.stderr, climit=5)
3437 stats.pprint(top=10, file=sys.stderr, climit=5)
3433 else:
3438 else:
3434 return d()
3439 return d()
3435 finally:
3440 finally:
3436 u.flush()
3441 u.flush()
3437 except:
3442 except:
3438 # enter the debugger when we hit an exception
3443 # enter the debugger when we hit an exception
3439 if options['debugger']:
3444 if options['debugger']:
3440 pdb.post_mortem(sys.exc_info()[2])
3445 pdb.post_mortem(sys.exc_info()[2])
3441 u.print_exc()
3446 u.print_exc()
3442 raise
3447 raise
3443 except ParseError, inst:
3448 except ParseError, inst:
3444 if inst.args[0]:
3449 if inst.args[0]:
3445 u.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
3450 u.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
3446 help_(u, inst.args[0])
3451 help_(u, inst.args[0])
3447 else:
3452 else:
3448 u.warn(_("hg: %s\n") % inst.args[1])
3453 u.warn(_("hg: %s\n") % inst.args[1])
3449 help_(u, 'shortlist')
3454 help_(u, 'shortlist')
3450 except AmbiguousCommand, inst:
3455 except AmbiguousCommand, inst:
3451 u.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
3456 u.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
3452 (inst.args[0], " ".join(inst.args[1])))
3457 (inst.args[0], " ".join(inst.args[1])))
3453 except UnknownCommand, inst:
3458 except UnknownCommand, inst:
3454 u.warn(_("hg: unknown command '%s'\n") % inst.args[0])
3459 u.warn(_("hg: unknown command '%s'\n") % inst.args[0])
3455 help_(u, 'shortlist')
3460 help_(u, 'shortlist')
3456 except hg.RepoError, inst:
3461 except hg.RepoError, inst:
3457 u.warn(_("abort: %s!\n") % inst)
3462 u.warn(_("abort: %s!\n") % inst)
3458 except lock.LockHeld, inst:
3463 except lock.LockHeld, inst:
3459 if inst.errno == errno.ETIMEDOUT:
3464 if inst.errno == errno.ETIMEDOUT:
3460 reason = _('timed out waiting for lock held by %s') % inst.locker
3465 reason = _('timed out waiting for lock held by %s') % inst.locker
3461 else:
3466 else:
3462 reason = _('lock held by %s') % inst.locker
3467 reason = _('lock held by %s') % inst.locker
3463 u.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
3468 u.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
3464 except lock.LockUnavailable, inst:
3469 except lock.LockUnavailable, inst:
3465 u.warn(_("abort: could not lock %s: %s\n") %
3470 u.warn(_("abort: could not lock %s: %s\n") %
3466 (inst.desc or inst.filename, inst.strerror))
3471 (inst.desc or inst.filename, inst.strerror))
3467 except revlog.RevlogError, inst:
3472 except revlog.RevlogError, inst:
3468 u.warn(_("abort: "), inst, "!\n")
3473 u.warn(_("abort: "), inst, "!\n")
3469 except util.SignalInterrupt:
3474 except util.SignalInterrupt:
3470 u.warn(_("killed!\n"))
3475 u.warn(_("killed!\n"))
3471 except KeyboardInterrupt:
3476 except KeyboardInterrupt:
3472 try:
3477 try:
3473 u.warn(_("interrupted!\n"))
3478 u.warn(_("interrupted!\n"))
3474 except IOError, inst:
3479 except IOError, inst:
3475 if inst.errno == errno.EPIPE:
3480 if inst.errno == errno.EPIPE:
3476 if u.debugflag:
3481 if u.debugflag:
3477 u.warn(_("\nbroken pipe\n"))
3482 u.warn(_("\nbroken pipe\n"))
3478 else:
3483 else:
3479 raise
3484 raise
3480 except IOError, inst:
3485 except IOError, inst:
3481 if hasattr(inst, "code"):
3486 if hasattr(inst, "code"):
3482 u.warn(_("abort: %s\n") % inst)
3487 u.warn(_("abort: %s\n") % inst)
3483 elif hasattr(inst, "reason"):
3488 elif hasattr(inst, "reason"):
3484 u.warn(_("abort: error: %s\n") % inst.reason[1])
3489 u.warn(_("abort: error: %s\n") % inst.reason[1])
3485 elif hasattr(inst, "args") and inst[0] == errno.EPIPE:
3490 elif hasattr(inst, "args") and inst[0] == errno.EPIPE:
3486 if u.debugflag:
3491 if u.debugflag:
3487 u.warn(_("broken pipe\n"))
3492 u.warn(_("broken pipe\n"))
3488 elif getattr(inst, "strerror", None):
3493 elif getattr(inst, "strerror", None):
3489 if getattr(inst, "filename", None):
3494 if getattr(inst, "filename", None):
3490 u.warn(_("abort: %s - %s\n") % (inst.strerror, inst.filename))
3495 u.warn(_("abort: %s - %s\n") % (inst.strerror, inst.filename))
3491 else:
3496 else:
3492 u.warn(_("abort: %s\n") % inst.strerror)
3497 u.warn(_("abort: %s\n") % inst.strerror)
3493 else:
3498 else:
3494 raise
3499 raise
3495 except OSError, inst:
3500 except OSError, inst:
3496 if hasattr(inst, "filename"):
3501 if hasattr(inst, "filename"):
3497 u.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
3502 u.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
3498 else:
3503 else:
3499 u.warn(_("abort: %s\n") % inst.strerror)
3504 u.warn(_("abort: %s\n") % inst.strerror)
3500 except util.Abort, inst:
3505 except util.Abort, inst:
3501 u.warn(_('abort: '), inst.args[0] % inst.args[1:], '\n')
3506 u.warn(_('abort: '), inst.args[0] % inst.args[1:], '\n')
3502 except TypeError, inst:
3507 except TypeError, inst:
3503 # was this an argument error?
3508 # was this an argument error?
3504 tb = traceback.extract_tb(sys.exc_info()[2])
3509 tb = traceback.extract_tb(sys.exc_info()[2])
3505 if len(tb) > 2: # no
3510 if len(tb) > 2: # no
3506 raise
3511 raise
3507 u.debug(inst, "\n")
3512 u.debug(inst, "\n")
3508 u.warn(_("%s: invalid arguments\n") % cmd)
3513 u.warn(_("%s: invalid arguments\n") % cmd)
3509 help_(u, cmd)
3514 help_(u, cmd)
3510 except SystemExit, inst:
3515 except SystemExit, inst:
3511 # Commands shouldn't sys.exit directly, but give a return code.
3516 # Commands shouldn't sys.exit directly, but give a return code.
3512 # Just in case, catch this and pass the exit code to the caller.
3517 # Just in case, catch this and pass the exit code to the caller.
3513 return inst.code
3518 return inst.code
3514 except:
3519 except:
3515 u.warn(_("** unknown exception encountered, details follow\n"))
3520 u.warn(_("** unknown exception encountered, details follow\n"))
3516 u.warn(_("** report bug details to mercurial@selenic.com\n"))
3521 u.warn(_("** report bug details to "
3522 "http://www.selenic.com/mercurial/bts\n"))
3523 u.warn(_("** or mercurial@selenic.com\n"))
3517 u.warn(_("** Mercurial Distributed SCM (version %s)\n")
3524 u.warn(_("** Mercurial Distributed SCM (version %s)\n")
3518 % version.get_version())
3525 % version.get_version())
3519 raise
3526 raise
3520
3527
3521 return -1
3528 return -1
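dispatch is also where extensions are wired in: each module named in the ui's extension list is imported, its uisetup(ui) hook is called, its cmdtable is merged into the command table (with a warning on overrides), and its reposetup(ui, repo) hook runs once a repository has been opened. A minimal, hypothetical extension skeleton that relies only on those hook points; the module and command names are invented for illustration:

# myext.py - skeleton extension (illustrative only)
from mercurial.i18n import gettext as _

def uisetup(ui):
    ui.debug("myext: uisetup\n")

def reposetup(ui, repo):
    ui.debug("myext: reposetup for %s\n" % repo.root)

def nudge(ui, repo, **opts):
    ui.write(_("nudging %s\n") % repo.root)

cmdtable = {
    "nudge": (nudge, [], _("hg nudge")),
}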
@@ -1,124 +1,126 b''
1 # context.py - changeset and file context objects for mercurial
1 # context.py - changeset and file context objects for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 class changectx(object):
8 class changectx(object):
9 """A changecontext object makes access to data related to a particular
9 """A changecontext object makes access to data related to a particular
10 changeset convenient."""
10 changeset convenient."""
11 def __init__(self, repo, changeid):
11 def __init__(self, repo, changeid):
12 """changeid is a revision number, node, or tag"""
12 """changeid is a revision number, node, or tag"""
13 self._repo = repo
13 self._repo = repo
14 self._id = changeid
14 self._id = changeid
15
15
16 self._node = self._repo.lookup(self._id)
16 self._node = self._repo.lookup(self._id)
17 self._rev = self._repo.changelog.rev(self._node)
17 self._rev = self._repo.changelog.rev(self._node)
18
18
19 def changeset(self):
19 def changeset(self):
20 try:
20 try:
21 return self._changeset
21 return self._changeset
22 except AttributeError:
22 except AttributeError:
23 self._changeset = self._repo.changelog.read(self.node())
23 self._changeset = self._repo.changelog.read(self.node())
24 return self._changeset
24 return self._changeset
25
25
26 def manifest(self):
26 def manifest(self):
27 try:
27 try:
28 return self._manifest
28 return self._manifest
29 except AttributeError:
29 except AttributeError:
30 self._manifest = self._repo.manifest.read(self.changeset()[0])
30 self._manifest = self._repo.manifest.read(self.changeset()[0])
31 return self._manifest
31 return self._manifest
32
32
33 def rev(self): return self._rev
33 def rev(self): return self._rev
34 def node(self): return self._node
34 def node(self): return self._node
35 def user(self): return self.changeset()[1]
35 def user(self): return self.changeset()[1]
36 def date(self): return self.changeset()[2]
36 def date(self): return self.changeset()[2]
37 def changedfiles(self): return self.changeset()[3]
37 def changedfiles(self): return self.changeset()[3]
38 def description(self): return self.changeset()[4]
38 def description(self): return self.changeset()[4]
39
39
40 def parents(self):
40 def parents(self):
41 """return contexts for each parent changeset"""
41 """return contexts for each parent changeset"""
42 p = self.repo.changelog.parents(self._node)
42 p = self._repo.changelog.parents(self._node)
43 return [ changectx(self._repo, x) for x in p ]
43 return [ changectx(self._repo, x) for x in p ]
44
44
45 def children(self):
45 def children(self):
46 """return contexts for each child changeset"""
46 """return contexts for each child changeset"""
47 c = self.repo.changelog.children(self._node)
47 c = self._repo.changelog.children(self._node)
48 return [ changectx(self._repo, x) for x in c ]
48 return [ changectx(self._repo, x) for x in c ]
49
49
50 def filenode(self, path):
50 def filenode(self, path):
51 node, flag = self._repo.manifest.find(self.changeset()[0], path)
51 node, flag = self._repo.manifest.find(self.changeset()[0], path)
52 return node
52 return node
53
53
54 def filectx(self, path):
54 def filectx(self, path, fileid=None):
55 """get a file context from this changeset"""
55 """get a file context from this changeset"""
56 return filectx(self._repo, path, fileid=self.filenode(path))
56 if fileid is None:
57 fileid = self.filenode(path)
58 return filectx(self._repo, path, fileid=fileid)
57
59
58 def filectxs(self):
60 def filectxs(self):
59 """generate a file context for each file in this changeset's
61 """generate a file context for each file in this changeset's
60 manifest"""
62 manifest"""
61 mf = self.manifest()
63 mf = self.manifest()
62 m = mf.keys()
64 m = mf.keys()
63 m.sort()
65 m.sort()
64 for f in m:
66 for f in m:
65 yield self.filectx(f, fileid=mf[f])
67 yield self.filectx(f, fileid=mf[f])
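The changectx class above gives lazy, attribute-cached access to a changeset's changelog entry and manifest. A brief usage sketch, assuming an already-open repository object named repo:

# hypothetical usage; 'repo' is an open repository object
from mercurial.context import changectx

ctx = changectx(repo, "tip")              # revision number, node, or tag
print ctx.rev(), ctx.user(), ctx.description()
for parent in ctx.parents():              # parent changesets as changectx objects
    print parent.rev()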
66
68
67 class filectx(object):
69 class filectx(object):
68 """A filecontext object makes access to data related to a particular
70 """A filecontext object makes access to data related to a particular
69 filerevision convenient."""
71 filerevision convenient."""
70 def __init__(self, repo, path, changeid=None, fileid=None):
72 def __init__(self, repo, path, changeid=None, fileid=None):
71 """changeid can be a changeset revision, node, or tag.
73 """changeid can be a changeset revision, node, or tag.
72 fileid can be a file revision or node."""
74 fileid can be a file revision or node."""
73 self._repo = repo
75 self._repo = repo
74 self._path = path
76 self._path = path
75 self._id = changeid
77 self._id = changeid
76 self._fileid = fileid
78 self._fileid = fileid
77
79
78 if self._id:
80 if self._id:
79 # if given a changeset id, go ahead and look up the file
81 # if given a changeset id, go ahead and look up the file
80 self._changeset = changectx(repo, self._id)
82 self._changeset = self._repo.changelog.read(self._id)
81 node, flag = self._repo.manifest.find(self._changeset[0], path)
83 node, flag = self._repo.manifest.find(self._changeset[0], path)
82 self._node = node
84 self._filelog = self._repo.file(self._path)
83 self._filelog = self.repo.file(self._path)
85 self._filenode = node
84 elif self._fileid:
86 elif self._fileid:
85 # else be lazy
87 # else be lazy
86 self._filelog = self._repo.file(self._path)
88 self._filelog = self._repo.file(self._path)
87 self._filenode = self._filelog.lookup(self._fileid)
89 self._filenode = self._filelog.lookup(self._fileid)
88 self._filerev = self._filelog.rev(self._filenode)
90 self._filerev = self._filelog.rev(self._filenode)
89
91
90 def changeset(self):
92 def changeset(self):
91 try:
93 try:
92 return self._changeset
94 return self._changeset
93 except AttributeError:
95 except AttributeError:
94 self._changeset = self._repo.changelog.read(self.node())
96 self._changeset = self._repo.changelog.read(self.node())
95 return self._changeset
97 return self._changeset
96
98
97 def filerev(self): return self._filerev
99 def filerev(self): return self._filerev
98 def filenode(self): return self._filenode
100 def filenode(self): return self._filenode
99 def filelog(self): return self._filelog
101 def filelog(self): return self._filelog
100
102
101 def rev(self): return self.changeset().rev()
103 def rev(self): return self.changeset().rev()
102 def node(self): return self.changeset().node()
104 def node(self): return self.changeset().node()
103 def user(self): return self.changeset().user()
105 def user(self): return self.changeset().user()
104 def date(self): return self.changeset().date()
106 def date(self): return self.changeset().date()
105 def files(self): return self.changeset().files()
107 def files(self): return self.changeset().files()
106 def description(self): return self.changeset().description()
108 def description(self): return self.changeset().description()
107 def manifest(self): return self.changeset().manifest()
109 def manifest(self): return self.changeset().manifest()
108
110
109 def data(self): return self._filelog.read(self._filenode)
111 def data(self): return self._filelog.read(self._filenode)
110 def metadata(self): return self._filelog.readmeta(self._filenode)
112 def metadata(self): return self._filelog.readmeta(self._filenode)
111 def renamed(self): return self._filelog.renamed(self._filenode)
113 def renamed(self): return self._filelog.renamed(self._filenode)
112
114
113 def parents(self):
115 def parents(self):
114 # need to fix for renames
116 # need to fix for renames
115 p = self._filelog.parents(self._filenode)
117 p = self._filelog.parents(self._filenode)
116 return [ filectx(self._repo, self._path, fileid=x) for x in p ]
118 return [ filectx(self._repo, self._path, fileid=x) for x in p ]
117
119
118 def children(self):
120 def children(self):
119 # hard for renames
121 # hard for renames
120 c = self._filelog.children(self._filenode)
122 c = self._filelog.children(self._filenode)
121 return [ filectx(self._repo, self._path, fileid=x) for x in c ]
123 return [ filectx(self._repo, self._path, fileid=x) for x in c ]
122
124
123 def annotate(self):
125 def annotate(self):
124 return self._filelog.annotate(self._filenode)
126 return self._filelog.annotate(self._filenode)
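The changectx/filectx code above is a thin convenience layer over the changelog, manifest and filelogs. As an illustration only (not part of the changeset), here is a minimal usage sketch in the Python 2 style of this code base; the module name mercurial.context is assumed from the class definitions rather than shown in this hunk, and the repository path is a placeholder. Only methods visible in the hunk are exercised.

    from mercurial import hg, ui
    from mercurial.node import hex
    from mercurial.context import changectx   # module name assumed, not shown in this hunk

    u = ui.ui()
    repo = hg.repository(u, ".")               # open the repository in the current directory
    ctx = changectx(repo, repo.changelog.tip())

    # one filectx per file in the tip manifest, as filectxs() above generates them
    for fctx in ctx.filectxs():
        print fctx.filerev(), hex(fctx.filenode()), len(fctx.data())

    parents = ctx.parents()                    # changectx objects for each parent changeset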
@@ -1,208 +1,209 b''
1 # hg.py - repository classes for mercurial
1 # hg.py - repository classes for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from repo import *
9 from repo import *
10 from demandload import *
10 from demandload import *
11 from i18n import gettext as _
11 from i18n import gettext as _
12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
12 demandload(globals(), "localrepo bundlerepo httprepo sshrepo statichttprepo")
13 demandload(globals(), "errno lock os shutil util")
13 demandload(globals(), "errno lock os shutil util")
14
14
15 def bundle(ui, path):
15 def bundle(ui, path):
16 if path.startswith('bundle://'):
16 if path.startswith('bundle://'):
17 path = path[9:]
17 path = path[9:]
18 else:
18 else:
19 path = path[7:]
19 path = path[7:]
20 s = path.split("+", 1)
20 s = path.split("+", 1)
21 if len(s) == 1:
21 if len(s) == 1:
22 repopath, bundlename = "", s[0]
22 repopath, bundlename = "", s[0]
23 else:
23 else:
24 repopath, bundlename = s
24 repopath, bundlename = s
25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
25 return bundlerepo.bundlerepository(ui, repopath, bundlename)
26
26
27 def hg(ui, path):
27 def hg(ui, path):
28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
28 ui.warn(_("hg:// syntax is deprecated, please use http:// instead\n"))
29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
29 return httprepo.httprepository(ui, path.replace("hg://", "http://"))
30
30
31 def local_(ui, path, create=0):
31 def local_(ui, path, create=0):
32 if path.startswith('file:'):
32 if path.startswith('file:'):
33 path = path[5:]
33 path = path[5:]
34 return localrepo.localrepository(ui, path, create)
34 return localrepo.localrepository(ui, path, create)
35
35
36 def ssh_(ui, path, create=0):
36 def ssh_(ui, path, create=0):
37 return sshrepo.sshrepository(ui, path, create)
37 return sshrepo.sshrepository(ui, path, create)
38
38
39 def old_http(ui, path):
39 def old_http(ui, path):
40 ui.warn(_("old-http:// syntax is deprecated, "
40 ui.warn(_("old-http:// syntax is deprecated, "
41 "please use static-http:// instead\n"))
41 "please use static-http:// instead\n"))
42 return statichttprepo.statichttprepository(
42 return statichttprepo.statichttprepository(
43 ui, path.replace("old-http://", "http://"))
43 ui, path.replace("old-http://", "http://"))
44
44
45 def static_http(ui, path):
45 def static_http(ui, path):
46 return statichttprepo.statichttprepository(
46 return statichttprepo.statichttprepository(
47 ui, path.replace("static-http://", "http://"))
47 ui, path.replace("static-http://", "http://"))
48
48
49 schemes = {
49 schemes = {
50 'bundle': bundle,
50 'bundle': bundle,
51 'file': local_,
51 'file': local_,
52 'hg': hg,
52 'hg': hg,
53 'http': lambda ui, path: httprepo.httprepository(ui, path),
53 'http': lambda ui, path: httprepo.httprepository(ui, path),
54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
54 'https': lambda ui, path: httprepo.httpsrepository(ui, path),
55 'old-http': old_http,
55 'old-http': old_http,
56 'ssh': ssh_,
56 'ssh': ssh_,
57 'static-http': static_http,
57 'static-http': static_http,
58 }
58 }
59
59
60 def repository(ui, path=None, create=0):
60 def repository(ui, path=None, create=0):
61 scheme = None
61 scheme = None
62 if path:
62 if path:
63 c = path.find(':')
63 c = path.find(':')
64 if c > 0:
64 if c > 0:
65 scheme = schemes.get(path[:c])
65 scheme = schemes.get(path[:c])
66 else:
66 else:
67 path = ''
67 path = ''
68 ctor = scheme or schemes['file']
68 ctor = scheme or schemes['file']
69 if create:
69 if create:
70 try:
70 try:
71 return ctor(ui, path, create)
71 return ctor(ui, path, create)
72 except TypeError:
72 except TypeError:
73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
73 raise util.Abort(_('cannot create new repository over "%s" protocol') %
74 scheme)
74 scheme)
75 return ctor(ui, path)
75 return ctor(ui, path)
76
76
77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
77 def clone(ui, source, dest=None, pull=False, rev=None, update=True,
78 stream=False):
78 stream=False):
79 """Make a copy of an existing repository.
79 """Make a copy of an existing repository.
80
80
81 Create a copy of an existing repository in a new directory. The
81 Create a copy of an existing repository in a new directory. The
82 source and destination are URLs, as passed to the repository
82 source and destination are URLs, as passed to the repository
83 function. Returns a pair of repository objects, the source and
83 function. Returns a pair of repository objects, the source and
84 newly created destination.
84 newly created destination.
85
85
86 The location of the source is added to the new repository's
86 The location of the source is added to the new repository's
87 .hg/hgrc file, as the default to be used for future pulls and
87 .hg/hgrc file, as the default to be used for future pulls and
88 pushes.
88 pushes.
89
89
90 If an exception is raised, the partly cloned/updated destination
90 If an exception is raised, the partly cloned/updated destination
91 repository will be deleted.
91 repository will be deleted.
92
92
93 Keyword arguments:
93 Keyword arguments:
94
94
95 dest: URL of destination repository to create (defaults to base
95 dest: URL of destination repository to create (defaults to base
96 name of source repository)
96 name of source repository)
97
97
98 pull: always pull from source repository, even in local case
98 pull: always pull from source repository, even in local case
99
99
100 stream: stream from repository (fast over LAN, slow over WAN)
100 stream: stream raw data uncompressed from repository (fast over
101 LAN, slow over WAN)
101
102
102 rev: revision to clone up to (implies pull=True)
103 rev: revision to clone up to (implies pull=True)
103
104
104 update: update working directory after clone completes, if
105 update: update working directory after clone completes, if
105 destination is local repository
106 destination is local repository
106 """
107 """
107 if dest is None:
108 if dest is None:
108 dest = os.path.basename(os.path.normpath(source))
109 dest = os.path.basename(os.path.normpath(source))
109
110
110 if os.path.exists(dest):
111 if os.path.exists(dest):
111 raise util.Abort(_("destination '%s' already exists"), dest)
112 raise util.Abort(_("destination '%s' already exists"), dest)
112
113
113 class DirCleanup(object):
114 class DirCleanup(object):
114 def __init__(self, dir_):
115 def __init__(self, dir_):
115 self.rmtree = shutil.rmtree
116 self.rmtree = shutil.rmtree
116 self.dir_ = dir_
117 self.dir_ = dir_
117 def close(self):
118 def close(self):
118 self.dir_ = None
119 self.dir_ = None
119 def __del__(self):
120 def __del__(self):
120 if self.dir_:
121 if self.dir_:
121 self.rmtree(self.dir_, True)
122 self.rmtree(self.dir_, True)
122
123
123 src_repo = repository(ui, source)
124 src_repo = repository(ui, source)
124
125
125 dest_repo = None
126 dest_repo = None
126 try:
127 try:
127 dest_repo = repository(ui, dest)
128 dest_repo = repository(ui, dest)
128 raise util.Abort(_("destination '%s' already exists." % dest))
129 raise util.Abort(_("destination '%s' already exists." % dest))
129 except RepoError:
130 except RepoError:
130 dest_repo = repository(ui, dest, create=True)
131 dest_repo = repository(ui, dest, create=True)
131
132
132 dest_path = None
133 dest_path = None
133 dir_cleanup = None
134 dir_cleanup = None
134 if dest_repo.local():
135 if dest_repo.local():
135 dest_path = os.path.realpath(dest)
136 dest_path = os.path.realpath(dest)
136 dir_cleanup = DirCleanup(dest_path)
137 dir_cleanup = DirCleanup(dest_path)
137
138
138 abspath = source
139 abspath = source
139 copy = False
140 copy = False
140 if src_repo.local() and dest_repo.local():
141 if src_repo.local() and dest_repo.local():
141 abspath = os.path.abspath(source)
142 abspath = os.path.abspath(source)
142 copy = not pull and not rev
143 copy = not pull and not rev
143
144
144 src_lock, dest_lock = None, None
145 src_lock, dest_lock = None, None
145 if copy:
146 if copy:
146 try:
147 try:
147 # we use a lock here because if we race with commit, we
148 # we use a lock here because if we race with commit, we
148 # can end up with extra data in the cloned revlogs that's
149 # can end up with extra data in the cloned revlogs that's
149 # not pointed to by changesets, thus causing verify to
150 # not pointed to by changesets, thus causing verify to
150 # fail
151 # fail
151 src_lock = src_repo.lock()
152 src_lock = src_repo.lock()
152 except lock.LockException:
153 except lock.LockException:
153 copy = False
154 copy = False
154
155
155 if copy:
156 if copy:
156 # we lock here to avoid premature writing to the target
157 # we lock here to avoid premature writing to the target
157 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158 dest_lock = lock.lock(os.path.join(dest_path, ".hg", "lock"))
158
159
159 # we need to remove the (empty) data dir in dest so copyfiles
160 # we need to remove the (empty) data dir in dest so copyfiles
160 # can do its work
161 # can do its work
161 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 os.rmdir(os.path.join(dest_path, ".hg", "data"))
162 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 files = "data 00manifest.d 00manifest.i 00changelog.d 00changelog.i"
163 for f in files.split():
164 for f in files.split():
164 src = os.path.join(source, ".hg", f)
165 src = os.path.join(source, ".hg", f)
165 dst = os.path.join(dest_path, ".hg", f)
166 dst = os.path.join(dest_path, ".hg", f)
166 try:
167 try:
167 util.copyfiles(src, dst)
168 util.copyfiles(src, dst)
168 except OSError, inst:
169 except OSError, inst:
169 if inst.errno != errno.ENOENT:
170 if inst.errno != errno.ENOENT:
170 raise
171 raise
171
172
172 # we need to re-init the repo after manually copying the data
173 # we need to re-init the repo after manually copying the data
173 # into it
174 # into it
174 dest_repo = repository(ui, dest)
175 dest_repo = repository(ui, dest)
175
176
176 else:
177 else:
177 revs = None
178 revs = None
178 if rev:
179 if rev:
179 if not src_repo.local():
180 if not src_repo.local():
180 raise util.Abort(_("clone by revision not supported yet "
181 raise util.Abort(_("clone by revision not supported yet "
181 "for remote repositories"))
182 "for remote repositories"))
182 revs = [src_repo.lookup(r) for r in rev]
183 revs = [src_repo.lookup(r) for r in rev]
183
184
184 if dest_repo.local():
185 if dest_repo.local():
185 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 dest_repo.clone(src_repo, heads=revs, stream=stream)
186 elif src_repo.local():
187 elif src_repo.local():
187 src_repo.push(dest_repo, revs=revs)
188 src_repo.push(dest_repo, revs=revs)
188 else:
189 else:
189 raise util.Abort(_("clone from remote to remote not supported"))
190 raise util.Abort(_("clone from remote to remote not supported"))
190
191
191 if src_lock:
192 if src_lock:
192 src_lock.release()
193 src_lock.release()
193
194
194 if dest_repo.local():
195 if dest_repo.local():
195 fp = dest_repo.opener("hgrc", "w", text=True)
196 fp = dest_repo.opener("hgrc", "w", text=True)
196 fp.write("[paths]\n")
197 fp.write("[paths]\n")
197 fp.write("default = %s\n" % abspath)
198 fp.write("default = %s\n" % abspath)
198 fp.close()
199 fp.close()
199
200
200 if dest_lock:
201 if dest_lock:
201 dest_lock.release()
202 dest_lock.release()
202
203
203 if update:
204 if update:
204 dest_repo.update(dest_repo.changelog.tip())
205 dest_repo.update(dest_repo.changelog.tip())
205 if dir_cleanup:
206 if dir_cleanup:
206 dir_cleanup.close()
207 dir_cleanup.close()
207
208
208 return src_repo, dest_repo
209 return src_repo, dest_repo
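The hunk above ends the rewritten clone(); together with the schemes table and repository() it forms the programmatic entry point of this module. A minimal driving sketch, illustrative only: the paths are placeholders (the source is assumed to already hold a repository), and the keyword arguments simply mirror the clone() docstring above.

    from mercurial import ui, hg

    u = ui.ui()

    # repository() picks a constructor from the schemes table by URL prefix;
    # anything without a known scheme falls through to the local 'file' constructor.
    remote = hg.repository(u, "static-http://host.example/repo")
    work   = hg.repository(u, "/tmp/work", create=1)

    # clone() returns the (source, destination) pair and records the source as
    # the 'default' path in the new repository's .hg/hgrc, per the docstring above.
    src, dst = hg.clone(u, "/tmp/existing-repo", dest="/tmp/copy",
                        pull=False, rev=None, update=True, stream=False)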
@@ -1,957 +1,960 b''
1 # hgweb/hgweb_mod.py - Web interface for a repository.
1 # hgweb/hgweb_mod.py - Web interface for a repository.
2 #
2 #
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 # Copyright 2005 Matt Mackall <mpm@selenic.com>
5 #
5 #
6 # This software may be used and distributed according to the terms
6 # This software may be used and distributed according to the terms
7 # of the GNU General Public License, incorporated herein by reference.
7 # of the GNU General Public License, incorporated herein by reference.
8
8
9 import os
9 import os
10 import os.path
10 import os.path
11 import mimetypes
11 import mimetypes
12 from mercurial.demandload import demandload
12 from mercurial.demandload import demandload
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
13 demandload(globals(), "re zlib ConfigParser mimetools cStringIO sys tempfile")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
14 demandload(globals(), "mercurial:mdiff,ui,hg,util,archival,streamclone")
15 demandload(globals(), "mercurial:templater")
15 demandload(globals(), "mercurial:templater")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
16 demandload(globals(), "mercurial.hgweb.common:get_mtime,staticfile")
17 from mercurial.node import *
17 from mercurial.node import *
18 from mercurial.i18n import gettext as _
18 from mercurial.i18n import gettext as _
19
19
20 def _up(p):
20 def _up(p):
21 if p[0] != "/":
21 if p[0] != "/":
22 p = "/" + p
22 p = "/" + p
23 if p[-1] == "/":
23 if p[-1] == "/":
24 p = p[:-1]
24 p = p[:-1]
25 up = os.path.dirname(p)
25 up = os.path.dirname(p)
26 if up == "/":
26 if up == "/":
27 return "/"
27 return "/"
28 return up + "/"
28 return up + "/"
29
29
30 class hgweb(object):
30 class hgweb(object):
31 def __init__(self, repo, name=None):
31 def __init__(self, repo, name=None):
32 if type(repo) == type(""):
32 if type(repo) == type(""):
33 self.repo = hg.repository(ui.ui(), repo)
33 self.repo = hg.repository(ui.ui(), repo)
34 else:
34 else:
35 self.repo = repo
35 self.repo = repo
36
36
37 self.mtime = -1
37 self.mtime = -1
38 self.reponame = name
38 self.reponame = name
39 self.archives = 'zip', 'gz', 'bz2'
39 self.archives = 'zip', 'gz', 'bz2'
40 self.templatepath = self.repo.ui.config("web", "templates",
40 self.templatepath = self.repo.ui.config("web", "templates",
41 templater.templatepath())
41 templater.templatepath())
42
42
43 def refresh(self):
43 def refresh(self):
44 mtime = get_mtime(self.repo.root)
44 mtime = get_mtime(self.repo.root)
45 if mtime != self.mtime:
45 if mtime != self.mtime:
46 self.mtime = mtime
46 self.mtime = mtime
47 self.repo = hg.repository(self.repo.ui, self.repo.root)
47 self.repo = hg.repository(self.repo.ui, self.repo.root)
48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
48 self.maxchanges = int(self.repo.ui.config("web", "maxchanges", 10))
49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
49 self.maxfiles = int(self.repo.ui.config("web", "maxfiles", 10))
50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
50 self.allowpull = self.repo.ui.configbool("web", "allowpull", True)
51
51
52 def archivelist(self, nodeid):
52 def archivelist(self, nodeid):
53 allowed = self.repo.ui.configlist("web", "allow_archive")
53 allowed = self.repo.ui.configlist("web", "allow_archive")
54 for i in self.archives:
54 for i in self.archives:
55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
55 if i in allowed or self.repo.ui.configbool("web", "allow" + i):
56 yield {"type" : i, "node" : nodeid, "url": ""}
56 yield {"type" : i, "node" : nodeid, "url": ""}
57
57
58 def listfiles(self, files, mf):
58 def listfiles(self, files, mf):
59 for f in files[:self.maxfiles]:
59 for f in files[:self.maxfiles]:
60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
60 yield self.t("filenodelink", node=hex(mf[f]), file=f)
61 if len(files) > self.maxfiles:
61 if len(files) > self.maxfiles:
62 yield self.t("fileellipses")
62 yield self.t("fileellipses")
63
63
64 def listfilediffs(self, files, changeset):
64 def listfilediffs(self, files, changeset):
65 for f in files[:self.maxfiles]:
65 for f in files[:self.maxfiles]:
66 yield self.t("filedifflink", node=hex(changeset), file=f)
66 yield self.t("filedifflink", node=hex(changeset), file=f)
67 if len(files) > self.maxfiles:
67 if len(files) > self.maxfiles:
68 yield self.t("fileellipses")
68 yield self.t("fileellipses")
69
69
70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
70 def siblings(self, siblings=[], rev=None, hiderev=None, **args):
71 if not rev:
71 if not rev:
72 rev = lambda x: ""
72 rev = lambda x: ""
73 siblings = [s for s in siblings if s != nullid]
73 siblings = [s for s in siblings if s != nullid]
74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
74 if len(siblings) == 1 and rev(siblings[0]) == hiderev:
75 return
75 return
76 for s in siblings:
76 for s in siblings:
77 yield dict(node=hex(s), rev=rev(s), **args)
77 yield dict(node=hex(s), rev=rev(s), **args)
78
78
79 def renamelink(self, fl, node):
79 def renamelink(self, fl, node):
80 r = fl.renamed(node)
80 r = fl.renamed(node)
81 if r:
81 if r:
82 return [dict(file=r[0], node=hex(r[1]))]
82 return [dict(file=r[0], node=hex(r[1]))]
83 return []
83 return []
84
84
85 def showtag(self, t1, node=nullid, **args):
85 def showtag(self, t1, node=nullid, **args):
86 for t in self.repo.nodetags(node):
86 for t in self.repo.nodetags(node):
87 yield self.t(t1, tag=t, **args)
87 yield self.t(t1, tag=t, **args)
88
88
89 def diff(self, node1, node2, files):
89 def diff(self, node1, node2, files):
90 def filterfiles(filters, files):
90 def filterfiles(filters, files):
91 l = [x for x in files if x in filters]
91 l = [x for x in files if x in filters]
92
92
93 for t in filters:
93 for t in filters:
94 if t and t[-1] != os.sep:
94 if t and t[-1] != os.sep:
95 t += os.sep
95 t += os.sep
96 l += [x for x in files if x.startswith(t)]
96 l += [x for x in files if x.startswith(t)]
97 return l
97 return l
98
98
99 parity = [0]
99 parity = [0]
100 def diffblock(diff, f, fn):
100 def diffblock(diff, f, fn):
101 yield self.t("diffblock",
101 yield self.t("diffblock",
102 lines=prettyprintlines(diff),
102 lines=prettyprintlines(diff),
103 parity=parity[0],
103 parity=parity[0],
104 file=f,
104 file=f,
105 filenode=hex(fn or nullid))
105 filenode=hex(fn or nullid))
106 parity[0] = 1 - parity[0]
106 parity[0] = 1 - parity[0]
107
107
108 def prettyprintlines(diff):
108 def prettyprintlines(diff):
109 for l in diff.splitlines(1):
109 for l in diff.splitlines(1):
110 if l.startswith('+'):
110 if l.startswith('+'):
111 yield self.t("difflineplus", line=l)
111 yield self.t("difflineplus", line=l)
112 elif l.startswith('-'):
112 elif l.startswith('-'):
113 yield self.t("difflineminus", line=l)
113 yield self.t("difflineminus", line=l)
114 elif l.startswith('@'):
114 elif l.startswith('@'):
115 yield self.t("difflineat", line=l)
115 yield self.t("difflineat", line=l)
116 else:
116 else:
117 yield self.t("diffline", line=l)
117 yield self.t("diffline", line=l)
118
118
119 r = self.repo
119 r = self.repo
120 cl = r.changelog
120 cl = r.changelog
121 mf = r.manifest
121 mf = r.manifest
122 change1 = cl.read(node1)
122 change1 = cl.read(node1)
123 change2 = cl.read(node2)
123 change2 = cl.read(node2)
124 mmap1 = mf.read(change1[0])
124 mmap1 = mf.read(change1[0])
125 mmap2 = mf.read(change2[0])
125 mmap2 = mf.read(change2[0])
126 date1 = util.datestr(change1[2])
126 date1 = util.datestr(change1[2])
127 date2 = util.datestr(change2[2])
127 date2 = util.datestr(change2[2])
128
128
129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
129 modified, added, removed, deleted, unknown = r.changes(node1, node2)
130 if files:
130 if files:
131 modified, added, removed = map(lambda x: filterfiles(files, x),
131 modified, added, removed = map(lambda x: filterfiles(files, x),
132 (modified, added, removed))
132 (modified, added, removed))
133
133
134 diffopts = self.repo.ui.diffopts()
134 diffopts = self.repo.ui.diffopts()
135 showfunc = diffopts['showfunc']
135 showfunc = diffopts['showfunc']
136 ignorews = diffopts['ignorews']
136 ignorews = diffopts['ignorews']
137 ignorewsamount = diffopts['ignorewsamount']
137 ignorewsamount = diffopts['ignorewsamount']
138 ignoreblanklines = diffopts['ignoreblanklines']
138 ignoreblanklines = diffopts['ignoreblanklines']
139 for f in modified:
139 for f in modified:
140 to = r.file(f).read(mmap1[f])
140 to = r.file(f).read(mmap1[f])
141 tn = r.file(f).read(mmap2[f])
141 tn = r.file(f).read(mmap2[f])
142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
142 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
143 showfunc=showfunc, ignorews=ignorews,
143 showfunc=showfunc, ignorews=ignorews,
144 ignorewsamount=ignorewsamount,
144 ignorewsamount=ignorewsamount,
145 ignoreblanklines=ignoreblanklines), f, tn)
145 ignoreblanklines=ignoreblanklines), f, tn)
146 for f in added:
146 for f in added:
147 to = None
147 to = None
148 tn = r.file(f).read(mmap2[f])
148 tn = r.file(f).read(mmap2[f])
149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
149 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
150 showfunc=showfunc, ignorews=ignorews,
150 showfunc=showfunc, ignorews=ignorews,
151 ignorewsamount=ignorewsamount,
151 ignorewsamount=ignorewsamount,
152 ignoreblanklines=ignoreblanklines), f, tn)
152 ignoreblanklines=ignoreblanklines), f, tn)
153 for f in removed:
153 for f in removed:
154 to = r.file(f).read(mmap1[f])
154 to = r.file(f).read(mmap1[f])
155 tn = None
155 tn = None
156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
156 yield diffblock(mdiff.unidiff(to, date1, tn, date2, f,
157 showfunc=showfunc, ignorews=ignorews,
157 showfunc=showfunc, ignorews=ignorews,
158 ignorewsamount=ignorewsamount,
158 ignorewsamount=ignorewsamount,
159 ignoreblanklines=ignoreblanklines), f, tn)
159 ignoreblanklines=ignoreblanklines), f, tn)
160
160
161 def changelog(self, pos):
161 def changelog(self, pos):
162 def changenav(**map):
162 def changenav(**map):
163 def seq(factor, maxchanges=None):
163 def seq(factor, maxchanges=None):
164 if maxchanges:
164 if maxchanges:
165 yield maxchanges
165 yield maxchanges
166 if maxchanges >= 20 and maxchanges <= 40:
166 if maxchanges >= 20 and maxchanges <= 40:
167 yield 50
167 yield 50
168 else:
168 else:
169 yield 1 * factor
169 yield 1 * factor
170 yield 3 * factor
170 yield 3 * factor
171 for f in seq(factor * 10):
171 for f in seq(factor * 10):
172 yield f
172 yield f
173
173
174 l = []
174 l = []
175 last = 0
175 last = 0
176 for f in seq(1, self.maxchanges):
176 for f in seq(1, self.maxchanges):
177 if f < self.maxchanges or f <= last:
177 if f < self.maxchanges or f <= last:
178 continue
178 continue
179 if f > count:
179 if f > count:
180 break
180 break
181 last = f
181 last = f
182 r = "%d" % f
182 r = "%d" % f
183 if pos + f < count:
183 if pos + f < count:
184 l.append(("+" + r, pos + f))
184 l.append(("+" + r, pos + f))
185 if pos - f >= 0:
185 if pos - f >= 0:
186 l.insert(0, ("-" + r, pos - f))
186 l.insert(0, ("-" + r, pos - f))
187
187
188 yield {"rev": 0, "label": "(0)"}
188 yield {"rev": 0, "label": "(0)"}
189
189
190 for label, rev in l:
190 for label, rev in l:
191 yield {"label": label, "rev": rev}
191 yield {"label": label, "rev": rev}
192
192
193 yield {"label": "tip", "rev": "tip"}
193 yield {"label": "tip", "rev": "tip"}
194
194
195 def changelist(**map):
195 def changelist(**map):
196 parity = (start - end) & 1
196 parity = (start - end) & 1
197 cl = self.repo.changelog
197 cl = self.repo.changelog
198 l = [] # build a list in forward order for efficiency
198 l = [] # build a list in forward order for efficiency
199 for i in range(start, end):
199 for i in range(start, end):
200 n = cl.node(i)
200 n = cl.node(i)
201 changes = cl.read(n)
201 changes = cl.read(n)
202 hn = hex(n)
202 hn = hex(n)
203
203
204 l.insert(0, {"parity": parity,
204 l.insert(0, {"parity": parity,
205 "author": changes[1],
205 "author": changes[1],
206 "parent": self.siblings(cl.parents(n), cl.rev,
206 "parent": self.siblings(cl.parents(n), cl.rev,
207 cl.rev(n) - 1),
207 cl.rev(n) - 1),
208 "child": self.siblings(cl.children(n), cl.rev,
208 "child": self.siblings(cl.children(n), cl.rev,
209 cl.rev(n) + 1),
209 cl.rev(n) + 1),
210 "changelogtag": self.showtag("changelogtag",n),
210 "changelogtag": self.showtag("changelogtag",n),
211 "manifest": hex(changes[0]),
211 "manifest": hex(changes[0]),
212 "desc": changes[4],
212 "desc": changes[4],
213 "date": changes[2],
213 "date": changes[2],
214 "files": self.listfilediffs(changes[3], n),
214 "files": self.listfilediffs(changes[3], n),
215 "rev": i,
215 "rev": i,
216 "node": hn})
216 "node": hn})
217 parity = 1 - parity
217 parity = 1 - parity
218
218
219 for e in l:
219 for e in l:
220 yield e
220 yield e
221
221
222 cl = self.repo.changelog
222 cl = self.repo.changelog
223 mf = cl.read(cl.tip())[0]
223 mf = cl.read(cl.tip())[0]
224 count = cl.count()
224 count = cl.count()
225 start = max(0, pos - self.maxchanges + 1)
225 start = max(0, pos - self.maxchanges + 1)
226 end = min(count, start + self.maxchanges)
226 end = min(count, start + self.maxchanges)
227 pos = end - 1
227 pos = end - 1
228
228
229 yield self.t('changelog',
229 yield self.t('changelog',
230 changenav=changenav,
230 changenav=changenav,
231 manifest=hex(mf),
231 manifest=hex(mf),
232 rev=pos, changesets=count, entries=changelist,
232 rev=pos, changesets=count, entries=changelist,
233 archives=self.archivelist("tip"))
233 archives=self.archivelist("tip"))
234
234
235 def search(self, query):
235 def search(self, query):
236
236
237 def changelist(**map):
237 def changelist(**map):
238 cl = self.repo.changelog
238 cl = self.repo.changelog
239 count = 0
239 count = 0
240 qw = query.lower().split()
240 qw = query.lower().split()
241
241
242 def revgen():
242 def revgen():
243 for i in range(cl.count() - 1, 0, -100):
243 for i in range(cl.count() - 1, 0, -100):
244 l = []
244 l = []
245 for j in range(max(0, i - 100), i):
245 for j in range(max(0, i - 100), i):
246 n = cl.node(j)
246 n = cl.node(j)
247 changes = cl.read(n)
247 changes = cl.read(n)
248 l.append((n, j, changes))
248 l.append((n, j, changes))
249 l.reverse()
249 l.reverse()
250 for e in l:
250 for e in l:
251 yield e
251 yield e
252
252
253 for n, i, changes in revgen():
253 for n, i, changes in revgen():
254 miss = 0
254 miss = 0
255 for q in qw:
255 for q in qw:
256 if not (q in changes[1].lower() or
256 if not (q in changes[1].lower() or
257 q in changes[4].lower() or
257 q in changes[4].lower() or
258 q in " ".join(changes[3][:20]).lower()):
258 q in " ".join(changes[3][:20]).lower()):
259 miss = 1
259 miss = 1
260 break
260 break
261 if miss:
261 if miss:
262 continue
262 continue
263
263
264 count += 1
264 count += 1
265 hn = hex(n)
265 hn = hex(n)
266
266
267 yield self.t('searchentry',
267 yield self.t('searchentry',
268 parity=count & 1,
268 parity=count & 1,
269 author=changes[1],
269 author=changes[1],
270 parent=self.siblings(cl.parents(n), cl.rev),
270 parent=self.siblings(cl.parents(n), cl.rev),
271 child=self.siblings(cl.children(n), cl.rev),
271 child=self.siblings(cl.children(n), cl.rev),
272 changelogtag=self.showtag("changelogtag",n),
272 changelogtag=self.showtag("changelogtag",n),
273 manifest=hex(changes[0]),
273 manifest=hex(changes[0]),
274 desc=changes[4],
274 desc=changes[4],
275 date=changes[2],
275 date=changes[2],
276 files=self.listfilediffs(changes[3], n),
276 files=self.listfilediffs(changes[3], n),
277 rev=i,
277 rev=i,
278 node=hn)
278 node=hn)
279
279
280 if count >= self.maxchanges:
280 if count >= self.maxchanges:
281 break
281 break
282
282
283 cl = self.repo.changelog
283 cl = self.repo.changelog
284 mf = cl.read(cl.tip())[0]
284 mf = cl.read(cl.tip())[0]
285
285
286 yield self.t('search',
286 yield self.t('search',
287 query=query,
287 query=query,
288 manifest=hex(mf),
288 manifest=hex(mf),
289 entries=changelist)
289 entries=changelist)
290
290
291 def changeset(self, nodeid):
291 def changeset(self, nodeid):
292 cl = self.repo.changelog
292 cl = self.repo.changelog
293 n = self.repo.lookup(nodeid)
293 n = self.repo.lookup(nodeid)
294 nodeid = hex(n)
294 nodeid = hex(n)
295 changes = cl.read(n)
295 changes = cl.read(n)
296 p1 = cl.parents(n)[0]
296 p1 = cl.parents(n)[0]
297
297
298 files = []
298 files = []
299 mf = self.repo.manifest.read(changes[0])
299 mf = self.repo.manifest.read(changes[0])
300 for f in changes[3]:
300 for f in changes[3]:
301 files.append(self.t("filenodelink",
301 files.append(self.t("filenodelink",
302 filenode=hex(mf.get(f, nullid)), file=f))
302 filenode=hex(mf.get(f, nullid)), file=f))
303
303
304 def diff(**map):
304 def diff(**map):
305 yield self.diff(p1, n, None)
305 yield self.diff(p1, n, None)
306
306
307 yield self.t('changeset',
307 yield self.t('changeset',
308 diff=diff,
308 diff=diff,
309 rev=cl.rev(n),
309 rev=cl.rev(n),
310 node=nodeid,
310 node=nodeid,
311 parent=self.siblings(cl.parents(n), cl.rev),
311 parent=self.siblings(cl.parents(n), cl.rev),
312 child=self.siblings(cl.children(n), cl.rev),
312 child=self.siblings(cl.children(n), cl.rev),
313 changesettag=self.showtag("changesettag",n),
313 changesettag=self.showtag("changesettag",n),
314 manifest=hex(changes[0]),
314 manifest=hex(changes[0]),
315 author=changes[1],
315 author=changes[1],
316 desc=changes[4],
316 desc=changes[4],
317 date=changes[2],
317 date=changes[2],
318 files=files,
318 files=files,
319 archives=self.archivelist(nodeid))
319 archives=self.archivelist(nodeid))
320
320
321 def filelog(self, f, filenode):
321 def filelog(self, f, filenode):
322 cl = self.repo.changelog
322 cl = self.repo.changelog
323 fl = self.repo.file(f)
323 fl = self.repo.file(f)
324 filenode = hex(fl.lookup(filenode))
324 filenode = hex(fl.lookup(filenode))
325 count = fl.count()
325 count = fl.count()
326
326
327 def entries(**map):
327 def entries(**map):
328 l = []
328 l = []
329 parity = (count - 1) & 1
329 parity = (count - 1) & 1
330
330
331 for i in range(count):
331 for i in range(count):
332 n = fl.node(i)
332 n = fl.node(i)
333 lr = fl.linkrev(n)
333 lr = fl.linkrev(n)
334 cn = cl.node(lr)
334 cn = cl.node(lr)
335 cs = cl.read(cl.node(lr))
335 cs = cl.read(cl.node(lr))
336
336
337 l.insert(0, {"parity": parity,
337 l.insert(0, {"parity": parity,
338 "filenode": hex(n),
338 "filenode": hex(n),
339 "filerev": i,
339 "filerev": i,
340 "file": f,
340 "file": f,
341 "node": hex(cn),
341 "node": hex(cn),
342 "author": cs[1],
342 "author": cs[1],
343 "date": cs[2],
343 "date": cs[2],
344 "rename": self.renamelink(fl, n),
344 "rename": self.renamelink(fl, n),
345 "parent": self.siblings(fl.parents(n),
345 "parent": self.siblings(fl.parents(n),
346 fl.rev, file=f),
346 fl.rev, file=f),
347 "child": self.siblings(fl.children(n),
347 "child": self.siblings(fl.children(n),
348 fl.rev, file=f),
348 fl.rev, file=f),
349 "desc": cs[4]})
349 "desc": cs[4]})
350 parity = 1 - parity
350 parity = 1 - parity
351
351
352 for e in l:
352 for e in l:
353 yield e
353 yield e
354
354
355 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
355 yield self.t("filelog", file=f, filenode=filenode, entries=entries)
356
356
357 def filerevision(self, f, node):
357 def filerevision(self, f, node):
358 fl = self.repo.file(f)
358 fl = self.repo.file(f)
359 n = fl.lookup(node)
359 n = fl.lookup(node)
360 node = hex(n)
360 node = hex(n)
361 text = fl.read(n)
361 text = fl.read(n)
362 changerev = fl.linkrev(n)
362 changerev = fl.linkrev(n)
363 cl = self.repo.changelog
363 cl = self.repo.changelog
364 cn = cl.node(changerev)
364 cn = cl.node(changerev)
365 cs = cl.read(cn)
365 cs = cl.read(cn)
366 mfn = cs[0]
366 mfn = cs[0]
367
367
368 mt = mimetypes.guess_type(f)[0]
368 mt = mimetypes.guess_type(f)[0]
369 rawtext = text
369 rawtext = text
370 if util.binary(text):
370 if util.binary(text):
371 mt = mt or 'application/octet-stream'
371 mt = mt or 'application/octet-stream'
372 text = "(binary:%s)" % mt
372 text = "(binary:%s)" % mt
373 mt = mt or 'text/plain'
373 mt = mt or 'text/plain'
374
374
375 def lines():
375 def lines():
376 for l, t in enumerate(text.splitlines(1)):
376 for l, t in enumerate(text.splitlines(1)):
377 yield {"line": t,
377 yield {"line": t,
378 "linenumber": "% 6d" % (l + 1),
378 "linenumber": "% 6d" % (l + 1),
379 "parity": l & 1}
379 "parity": l & 1}
380
380
381 yield self.t("filerevision",
381 yield self.t("filerevision",
382 file=f,
382 file=f,
383 filenode=node,
383 filenode=node,
384 path=_up(f),
384 path=_up(f),
385 text=lines(),
385 text=lines(),
386 raw=rawtext,
386 raw=rawtext,
387 mimetype=mt,
387 mimetype=mt,
388 rev=changerev,
388 rev=changerev,
389 node=hex(cn),
389 node=hex(cn),
390 manifest=hex(mfn),
390 manifest=hex(mfn),
391 author=cs[1],
391 author=cs[1],
392 date=cs[2],
392 date=cs[2],
393 parent=self.siblings(fl.parents(n), fl.rev, file=f),
393 parent=self.siblings(fl.parents(n), fl.rev, file=f),
394 child=self.siblings(fl.children(n), fl.rev, file=f),
394 child=self.siblings(fl.children(n), fl.rev, file=f),
395 rename=self.renamelink(fl, n),
395 rename=self.renamelink(fl, n),
396 permissions=self.repo.manifest.readflags(mfn)[f])
396 permissions=self.repo.manifest.readflags(mfn)[f])
397
397
398 def fileannotate(self, f, node):
398 def fileannotate(self, f, node):
399 bcache = {}
399 bcache = {}
400 ncache = {}
400 ncache = {}
401 fl = self.repo.file(f)
401 fl = self.repo.file(f)
402 n = fl.lookup(node)
402 n = fl.lookup(node)
403 node = hex(n)
403 node = hex(n)
404 changerev = fl.linkrev(n)
404 changerev = fl.linkrev(n)
405
405
406 cl = self.repo.changelog
406 cl = self.repo.changelog
407 cn = cl.node(changerev)
407 cn = cl.node(changerev)
408 cs = cl.read(cn)
408 cs = cl.read(cn)
409 mfn = cs[0]
409 mfn = cs[0]
410
410
411 def annotate(**map):
411 def annotate(**map):
412 parity = 1
412 parity = 1
413 last = None
413 last = None
414 for r, l in fl.annotate(n):
414 for r, l in fl.annotate(n):
415 try:
415 try:
416 cnode = ncache[r]
416 cnode = ncache[r]
417 except KeyError:
417 except KeyError:
418 cnode = ncache[r] = self.repo.changelog.node(r)
418 cnode = ncache[r] = self.repo.changelog.node(r)
419
419
420 try:
420 try:
421 name = bcache[r]
421 name = bcache[r]
422 except KeyError:
422 except KeyError:
423 cl = self.repo.changelog.read(cnode)
423 cl = self.repo.changelog.read(cnode)
424 bcache[r] = name = self.repo.ui.shortuser(cl[1])
424 bcache[r] = name = self.repo.ui.shortuser(cl[1])
425
425
426 if last != cnode:
426 if last != cnode:
427 parity = 1 - parity
427 parity = 1 - parity
428 last = cnode
428 last = cnode
429
429
430 yield {"parity": parity,
430 yield {"parity": parity,
431 "node": hex(cnode),
431 "node": hex(cnode),
432 "rev": r,
432 "rev": r,
433 "author": name,
433 "author": name,
434 "file": f,
434 "file": f,
435 "line": l}
435 "line": l}
436
436
437 yield self.t("fileannotate",
437 yield self.t("fileannotate",
438 file=f,
438 file=f,
439 filenode=node,
439 filenode=node,
440 annotate=annotate,
440 annotate=annotate,
441 path=_up(f),
441 path=_up(f),
442 rev=changerev,
442 rev=changerev,
443 node=hex(cn),
443 node=hex(cn),
444 manifest=hex(mfn),
444 manifest=hex(mfn),
445 author=cs[1],
445 author=cs[1],
446 date=cs[2],
446 date=cs[2],
447 rename=self.renamelink(fl, n),
447 rename=self.renamelink(fl, n),
448 parent=self.siblings(fl.parents(n), fl.rev, file=f),
448 parent=self.siblings(fl.parents(n), fl.rev, file=f),
449 child=self.siblings(fl.children(n), fl.rev, file=f),
449 child=self.siblings(fl.children(n), fl.rev, file=f),
450 permissions=self.repo.manifest.readflags(mfn)[f])
450 permissions=self.repo.manifest.readflags(mfn)[f])
451
451
452 def manifest(self, mnode, path):
452 def manifest(self, mnode, path):
453 man = self.repo.manifest
453 man = self.repo.manifest
454 mn = man.lookup(mnode)
454 mn = man.lookup(mnode)
455 mnode = hex(mn)
455 mnode = hex(mn)
456 mf = man.read(mn)
456 mf = man.read(mn)
457 rev = man.rev(mn)
457 rev = man.rev(mn)
458 changerev = man.linkrev(mn)
458 changerev = man.linkrev(mn)
459 node = self.repo.changelog.node(changerev)
459 node = self.repo.changelog.node(changerev)
460 mff = man.readflags(mn)
460 mff = man.readflags(mn)
461
461
462 files = {}
462 files = {}
463
463
464 p = path[1:]
464 p = path[1:]
465 if p and p[-1] != "/":
465 if p and p[-1] != "/":
466 p += "/"
466 p += "/"
467 l = len(p)
467 l = len(p)
468
468
469 for f,n in mf.items():
469 for f,n in mf.items():
470 if f[:l] != p:
470 if f[:l] != p:
471 continue
471 continue
472 remain = f[l:]
472 remain = f[l:]
473 if "/" in remain:
473 if "/" in remain:
474 short = remain[:remain.index("/") + 1] # bleah
474 short = remain[:remain.index("/") + 1] # bleah
475 files[short] = (f, None)
475 files[short] = (f, None)
476 else:
476 else:
477 short = os.path.basename(remain)
477 short = os.path.basename(remain)
478 files[short] = (f, n)
478 files[short] = (f, n)
479
479
480 def filelist(**map):
480 def filelist(**map):
481 parity = 0
481 parity = 0
482 fl = files.keys()
482 fl = files.keys()
483 fl.sort()
483 fl.sort()
484 for f in fl:
484 for f in fl:
485 full, fnode = files[f]
485 full, fnode = files[f]
486 if not fnode:
486 if not fnode:
487 continue
487 continue
488
488
489 yield {"file": full,
489 yield {"file": full,
490 "manifest": mnode,
490 "manifest": mnode,
491 "filenode": hex(fnode),
491 "filenode": hex(fnode),
492 "parity": parity,
492 "parity": parity,
493 "basename": f,
493 "basename": f,
494 "permissions": mff[full]}
494 "permissions": mff[full]}
495 parity = 1 - parity
495 parity = 1 - parity
496
496
497 def dirlist(**map):
497 def dirlist(**map):
498 parity = 0
498 parity = 0
499 fl = files.keys()
499 fl = files.keys()
500 fl.sort()
500 fl.sort()
501 for f in fl:
501 for f in fl:
502 full, fnode = files[f]
502 full, fnode = files[f]
503 if fnode:
503 if fnode:
504 continue
504 continue
505
505
506 yield {"parity": parity,
506 yield {"parity": parity,
507 "path": os.path.join(path, f),
507 "path": os.path.join(path, f),
508 "manifest": mnode,
508 "manifest": mnode,
509 "basename": f[:-1]}
509 "basename": f[:-1]}
510 parity = 1 - parity
510 parity = 1 - parity
511
511
512 yield self.t("manifest",
512 yield self.t("manifest",
513 manifest=mnode,
513 manifest=mnode,
514 rev=rev,
514 rev=rev,
515 node=hex(node),
515 node=hex(node),
516 path=path,
516 path=path,
517 up=_up(path),
517 up=_up(path),
518 fentries=filelist,
518 fentries=filelist,
519 dentries=dirlist,
519 dentries=dirlist,
520 archives=self.archivelist(hex(node)))
520 archives=self.archivelist(hex(node)))
521
521
522 def tags(self):
522 def tags(self):
523 cl = self.repo.changelog
523 cl = self.repo.changelog
524 mf = cl.read(cl.tip())[0]
524 mf = cl.read(cl.tip())[0]
525
525
526 i = self.repo.tagslist()
526 i = self.repo.tagslist()
527 i.reverse()
527 i.reverse()
528
528
529 def entries(notip=False, **map):
529 def entries(notip=False, **map):
530 parity = 0
530 parity = 0
531 for k,n in i:
531 for k,n in i:
532 if notip and k == "tip": continue
532 if notip and k == "tip": continue
533 yield {"parity": parity,
533 yield {"parity": parity,
534 "tag": k,
534 "tag": k,
535 "tagmanifest": hex(cl.read(n)[0]),
535 "tagmanifest": hex(cl.read(n)[0]),
536 "date": cl.read(n)[2],
536 "date": cl.read(n)[2],
537 "node": hex(n)}
537 "node": hex(n)}
538 parity = 1 - parity
538 parity = 1 - parity
539
539
540 yield self.t("tags",
540 yield self.t("tags",
541 manifest=hex(mf),
541 manifest=hex(mf),
542 entries=lambda **x: entries(False, **x),
542 entries=lambda **x: entries(False, **x),
543 entriesnotip=lambda **x: entries(True, **x))
543 entriesnotip=lambda **x: entries(True, **x))
544
544
545 def summary(self):
545 def summary(self):
546 cl = self.repo.changelog
546 cl = self.repo.changelog
547 mf = cl.read(cl.tip())[0]
547 mf = cl.read(cl.tip())[0]
548
548
549 i = self.repo.tagslist()
549 i = self.repo.tagslist()
550 i.reverse()
550 i.reverse()
551
551
552 def tagentries(**map):
552 def tagentries(**map):
553 parity = 0
553 parity = 0
554 count = 0
554 count = 0
555 for k,n in i:
555 for k,n in i:
556 if k == "tip": # skip tip
556 if k == "tip": # skip tip
557 continue;
557 continue;
558
558
559 count += 1
559 count += 1
560 if count > 10: # limit to 10 tags
560 if count > 10: # limit to 10 tags
561 break;
561 break;
562
562
563 c = cl.read(n)
563 c = cl.read(n)
564 m = c[0]
564 m = c[0]
565 t = c[2]
565 t = c[2]
566
566
567 yield self.t("tagentry",
567 yield self.t("tagentry",
568 parity = parity,
568 parity = parity,
569 tag = k,
569 tag = k,
570 node = hex(n),
570 node = hex(n),
571 date = t,
571 date = t,
572 tagmanifest = hex(m))
572 tagmanifest = hex(m))
573 parity = 1 - parity
573 parity = 1 - parity
574
574
575 def changelist(**map):
575 def changelist(**map):
576 parity = 0
576 parity = 0
577 cl = self.repo.changelog
577 cl = self.repo.changelog
578 l = [] # build a list in forward order for efficiency
578 l = [] # build a list in forward order for efficiency
579 for i in range(start, end):
579 for i in range(start, end):
580 n = cl.node(i)
580 n = cl.node(i)
581 changes = cl.read(n)
581 changes = cl.read(n)
582 hn = hex(n)
582 hn = hex(n)
583 t = changes[2]
583 t = changes[2]
584
584
585 l.insert(0, self.t(
585 l.insert(0, self.t(
586 'shortlogentry',
586 'shortlogentry',
587 parity = parity,
587 parity = parity,
588 author = changes[1],
588 author = changes[1],
589 manifest = hex(changes[0]),
589 manifest = hex(changes[0]),
590 desc = changes[4],
590 desc = changes[4],
591 date = t,
591 date = t,
592 rev = i,
592 rev = i,
593 node = hn))
593 node = hn))
594 parity = 1 - parity
594 parity = 1 - parity
595
595
596 yield l
596 yield l
597
597
598 cl = self.repo.changelog
598 cl = self.repo.changelog
599 mf = cl.read(cl.tip())[0]
599 mf = cl.read(cl.tip())[0]
600 count = cl.count()
600 count = cl.count()
601 start = max(0, count - self.maxchanges)
601 start = max(0, count - self.maxchanges)
602 end = min(count, start + self.maxchanges)
602 end = min(count, start + self.maxchanges)
603
603
604 yield self.t("summary",
604 yield self.t("summary",
605 desc = self.repo.ui.config("web", "description", "unknown"),
605 desc = self.repo.ui.config("web", "description", "unknown"),
606 owner = (self.repo.ui.config("ui", "username") or # preferred
606 owner = (self.repo.ui.config("ui", "username") or # preferred
607 self.repo.ui.config("web", "contact") or # deprecated
607 self.repo.ui.config("web", "contact") or # deprecated
608 self.repo.ui.config("web", "author", "unknown")), # also
608 self.repo.ui.config("web", "author", "unknown")), # also
609 lastchange = (0, 0), # FIXME
609 lastchange = (0, 0), # FIXME
610 manifest = hex(mf),
610 manifest = hex(mf),
611 tags = tagentries,
611 tags = tagentries,
612 shortlog = changelist)
612 shortlog = changelist)
613
613
614 def filediff(self, file, changeset):
614 def filediff(self, file, changeset):
615 cl = self.repo.changelog
615 cl = self.repo.changelog
616 n = self.repo.lookup(changeset)
616 n = self.repo.lookup(changeset)
617 changeset = hex(n)
617 changeset = hex(n)
618 p1 = cl.parents(n)[0]
618 p1 = cl.parents(n)[0]
619 cs = cl.read(n)
619 cs = cl.read(n)
620 mf = self.repo.manifest.read(cs[0])
620 mf = self.repo.manifest.read(cs[0])
621
621
622 def diff(**map):
622 def diff(**map):
623 yield self.diff(p1, n, [file])
623 yield self.diff(p1, n, [file])
624
624
625 yield self.t("filediff",
625 yield self.t("filediff",
626 file=file,
626 file=file,
627 filenode=hex(mf.get(file, nullid)),
627 filenode=hex(mf.get(file, nullid)),
628 node=changeset,
628 node=changeset,
629 rev=self.repo.changelog.rev(n),
629 rev=self.repo.changelog.rev(n),
630 parent=self.siblings(cl.parents(n), cl.rev),
630 parent=self.siblings(cl.parents(n), cl.rev),
631 child=self.siblings(cl.children(n), cl.rev),
631 child=self.siblings(cl.children(n), cl.rev),
632 diff=diff)
632 diff=diff)
633
633
634 archive_specs = {
634 archive_specs = {
635 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
635 'bz2': ('application/x-tar', 'tbz2', '.tar.bz2', None),
636 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
636 'gz': ('application/x-tar', 'tgz', '.tar.gz', None),
637 'zip': ('application/zip', 'zip', '.zip', None),
637 'zip': ('application/zip', 'zip', '.zip', None),
638 }
638 }
639
639
640 def archive(self, req, cnode, type_):
640 def archive(self, req, cnode, type_):
641 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
641 reponame = re.sub(r"\W+", "-", os.path.basename(self.reponame))
642 name = "%s-%s" % (reponame, short(cnode))
642 name = "%s-%s" % (reponame, short(cnode))
643 mimetype, artype, extension, encoding = self.archive_specs[type_]
643 mimetype, artype, extension, encoding = self.archive_specs[type_]
644 headers = [('Content-type', mimetype),
644 headers = [('Content-type', mimetype),
645 ('Content-disposition', 'attachment; filename=%s%s' %
645 ('Content-disposition', 'attachment; filename=%s%s' %
646 (name, extension))]
646 (name, extension))]
647 if encoding:
647 if encoding:
648 headers.append(('Content-encoding', encoding))
648 headers.append(('Content-encoding', encoding))
649 req.header(headers)
649 req.header(headers)
650 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
650 archival.archive(self.repo, req.out, cnode, artype, prefix=name)
651
651
652 # add tags to things
652 # add tags to things
653 # tags -> list of changesets corresponding to tags
653 # tags -> list of changesets corresponding to tags
654 # find tag, changeset, file
654 # find tag, changeset, file
655
655
656 def cleanpath(self, path):
656 def cleanpath(self, path):
657 p = util.normpath(path)
657 p = util.normpath(path)
658 if p[:2] == "..":
658 if p[:2] == "..":
659 raise Exception("suspicious path")
659 raise Exception("suspicious path")
660 return p
660 return p
661
661
662 def run(self):
662 def run(self):
663 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
663 if not os.environ.get('GATEWAY_INTERFACE', '').startswith("CGI/1."):
664 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
664 raise RuntimeError("This function is only intended to be called while running as a CGI script.")
665 import mercurial.hgweb.wsgicgi as wsgicgi
665 import mercurial.hgweb.wsgicgi as wsgicgi
666 from request import wsgiapplication
666 from request import wsgiapplication
667 def make_web_app():
667 def make_web_app():
668 return self
668 return self
669 wsgicgi.launch(wsgiapplication(make_web_app))
669 wsgicgi.launch(wsgiapplication(make_web_app))
670
670
671 def run_wsgi(self, req):
671 def run_wsgi(self, req):
672 def header(**map):
672 def header(**map):
673 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
673 header_file = cStringIO.StringIO(''.join(self.t("header", **map)))
674 msg = mimetools.Message(header_file, 0)
674 msg = mimetools.Message(header_file, 0)
675 req.header(msg.items())
675 req.header(msg.items())
676 yield header_file.read()
676 yield header_file.read()
677
677
678 def rawfileheader(**map):
678 def rawfileheader(**map):
679 req.header([('Content-type', map['mimetype']),
679 req.header([('Content-type', map['mimetype']),
680 ('Content-disposition', 'filename=%s' % map['file']),
680 ('Content-disposition', 'filename=%s' % map['file']),
681 ('Content-length', str(len(map['raw'])))])
681 ('Content-length', str(len(map['raw'])))])
682 yield ''
682 yield ''
683
683
684 def footer(**map):
684 def footer(**map):
685 yield self.t("footer",
685 yield self.t("footer",
686 motd=self.repo.ui.config("web", "motd", ""),
686 motd=self.repo.ui.config("web", "motd", ""),
687 **map)
687 **map)
688
688
689 def expand_form(form):
689 def expand_form(form):
690 shortcuts = {
690 shortcuts = {
691 'cl': [('cmd', ['changelog']), ('rev', None)],
691 'cl': [('cmd', ['changelog']), ('rev', None)],
692 'cs': [('cmd', ['changeset']), ('node', None)],
692 'cs': [('cmd', ['changeset']), ('node', None)],
693 'f': [('cmd', ['file']), ('filenode', None)],
693 'f': [('cmd', ['file']), ('filenode', None)],
694 'fl': [('cmd', ['filelog']), ('filenode', None)],
694 'fl': [('cmd', ['filelog']), ('filenode', None)],
695 'fd': [('cmd', ['filediff']), ('node', None)],
695 'fd': [('cmd', ['filediff']), ('node', None)],
696 'fa': [('cmd', ['annotate']), ('filenode', None)],
696 'fa': [('cmd', ['annotate']), ('filenode', None)],
697 'mf': [('cmd', ['manifest']), ('manifest', None)],
697 'mf': [('cmd', ['manifest']), ('manifest', None)],
698 'ca': [('cmd', ['archive']), ('node', None)],
698 'ca': [('cmd', ['archive']), ('node', None)],
699 'tags': [('cmd', ['tags'])],
699 'tags': [('cmd', ['tags'])],
700 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
700 'tip': [('cmd', ['changeset']), ('node', ['tip'])],
701 'static': [('cmd', ['static']), ('file', None)]
701 'static': [('cmd', ['static']), ('file', None)]
702 }
702 }
703
703
704 for k in shortcuts.iterkeys():
704 for k in shortcuts.iterkeys():
705 if form.has_key(k):
705 if form.has_key(k):
706 for name, value in shortcuts[k]:
706 for name, value in shortcuts[k]:
707 if value is None:
707 if value is None:
708 value = form[k]
708 value = form[k]
709 form[name] = value
709 form[name] = value
710 del form[k]
710 del form[k]
711
711
712 self.refresh()
712 self.refresh()
713
713
714 expand_form(req.form)
714 expand_form(req.form)
715
715
716 m = os.path.join(self.templatepath, "map")
716 m = os.path.join(self.templatepath, "map")
717 style = self.repo.ui.config("web", "style", "")
717 style = self.repo.ui.config("web", "style", "")
718 if req.form.has_key('style'):
718 if req.form.has_key('style'):
719 style = req.form['style'][0]
719 style = req.form['style'][0]
720 if style:
720 if style:
721 b = os.path.basename("map-" + style)
721 b = os.path.basename("map-" + style)
722 p = os.path.join(self.templatepath, b)
722 p = os.path.join(self.templatepath, b)
723 if os.path.isfile(p):
723 if os.path.isfile(p):
724 m = p
724 m = p
725
725
726 port = req.env["SERVER_PORT"]
726 port = req.env["SERVER_PORT"]
727 port = port != "80" and (":" + port) or ""
727 port = port != "80" and (":" + port) or ""
728 uri = req.env["REQUEST_URI"]
728 uri = req.env["REQUEST_URI"]
729 if "?" in uri:
729 if "?" in uri:
730 uri = uri.split("?")[0]
730 uri = uri.split("?")[0]
731 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
731 url = "http://%s%s%s" % (req.env["SERVER_NAME"], port, uri)
732 if not self.reponame:
732 if not self.reponame:
733 self.reponame = (self.repo.ui.config("web", "name")
733 self.reponame = (self.repo.ui.config("web", "name")
734 or uri.strip('/') or self.repo.root)
734 or uri.strip('/') or self.repo.root)
735
735
736 self.t = templater.templater(m, templater.common_filters,
736 self.t = templater.templater(m, templater.common_filters,
737 defaults={"url": url,
737 defaults={"url": url,
738 "repo": self.reponame,
738 "repo": self.reponame,
739 "header": header,
739 "header": header,
740 "footer": footer,
740 "footer": footer,
741 "rawfileheader": rawfileheader,
741 "rawfileheader": rawfileheader,
742 })
742 })
743
743
744 if not req.form.has_key('cmd'):
744 if not req.form.has_key('cmd'):
745 req.form['cmd'] = [self.t.cache['default'],]
745 req.form['cmd'] = [self.t.cache['default'],]
746
746
747 cmd = req.form['cmd'][0]
747 cmd = req.form['cmd'][0]
748
748
749 method = getattr(self, 'do_' + cmd, None)
749 method = getattr(self, 'do_' + cmd, None)
750 if method:
750 if method:
751 method(req)
751 method(req)
752 else:
752 else:
753 req.write(self.t("error"))
753 req.write(self.t("error"))
754
754
755 def do_changelog(self, req):
755 def do_changelog(self, req):
756 hi = self.repo.changelog.count() - 1
756 hi = self.repo.changelog.count() - 1
757 if req.form.has_key('rev'):
757 if req.form.has_key('rev'):
758 hi = req.form['rev'][0]
758 hi = req.form['rev'][0]
759 try:
759 try:
760 hi = self.repo.changelog.rev(self.repo.lookup(hi))
760 hi = self.repo.changelog.rev(self.repo.lookup(hi))
761 except hg.RepoError:
761 except hg.RepoError:
762 req.write(self.search(hi)) # XXX redirect to 404 page?
762 req.write(self.search(hi)) # XXX redirect to 404 page?
763 return
763 return
764
764
765 req.write(self.changelog(hi))
765 req.write(self.changelog(hi))
766
766
767 def do_changeset(self, req):
767 def do_changeset(self, req):
768 req.write(self.changeset(req.form['node'][0]))
768 req.write(self.changeset(req.form['node'][0]))
769
769
770 def do_manifest(self, req):
770 def do_manifest(self, req):
771 req.write(self.manifest(req.form['manifest'][0],
771 req.write(self.manifest(req.form['manifest'][0],
772 self.cleanpath(req.form['path'][0])))
772 self.cleanpath(req.form['path'][0])))
773
773
774 def do_tags(self, req):
774 def do_tags(self, req):
775 req.write(self.tags())
775 req.write(self.tags())
776
776
777 def do_summary(self, req):
777 def do_summary(self, req):
778 req.write(self.summary())
778 req.write(self.summary())
779
779
780 def do_filediff(self, req):
780 def do_filediff(self, req):
781 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
781 req.write(self.filediff(self.cleanpath(req.form['file'][0]),
782 req.form['node'][0]))
782 req.form['node'][0]))
783
783
784 def do_file(self, req):
784 def do_file(self, req):
785 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
785 req.write(self.filerevision(self.cleanpath(req.form['file'][0]),
786 req.form['filenode'][0]))
786 req.form['filenode'][0]))
787
787
788 def do_annotate(self, req):
788 def do_annotate(self, req):
789 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
789 req.write(self.fileannotate(self.cleanpath(req.form['file'][0]),
790 req.form['filenode'][0]))
790 req.form['filenode'][0]))
791
791
792 def do_filelog(self, req):
792 def do_filelog(self, req):
793 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
793 req.write(self.filelog(self.cleanpath(req.form['file'][0]),
794 req.form['filenode'][0]))
794 req.form['filenode'][0]))
795
795
796 def do_heads(self, req):
796 def do_heads(self, req):
797 resp = " ".join(map(hex, self.repo.heads())) + "\n"
797 resp = " ".join(map(hex, self.repo.heads())) + "\n"
798 req.httphdr("application/mercurial-0.1", length=len(resp))
798 req.httphdr("application/mercurial-0.1", length=len(resp))
799 req.write(resp)
799 req.write(resp)
800
800
801 def do_branches(self, req):
801 def do_branches(self, req):
802 nodes = []
802 nodes = []
803 if req.form.has_key('nodes'):
803 if req.form.has_key('nodes'):
804 nodes = map(bin, req.form['nodes'][0].split(" "))
804 nodes = map(bin, req.form['nodes'][0].split(" "))
805 resp = cStringIO.StringIO()
805 resp = cStringIO.StringIO()
806 for b in self.repo.branches(nodes):
806 for b in self.repo.branches(nodes):
807 resp.write(" ".join(map(hex, b)) + "\n")
807 resp.write(" ".join(map(hex, b)) + "\n")
808 resp = resp.getvalue()
808 resp = resp.getvalue()
809 req.httphdr("application/mercurial-0.1", length=len(resp))
809 req.httphdr("application/mercurial-0.1", length=len(resp))
810 req.write(resp)
810 req.write(resp)
811
811
812 def do_between(self, req):
812 def do_between(self, req):
813 nodes = []
813 nodes = []
814 if req.form.has_key('pairs'):
814 if req.form.has_key('pairs'):
815 pairs = [map(bin, p.split("-"))
815 pairs = [map(bin, p.split("-"))
816 for p in req.form['pairs'][0].split(" ")]
816 for p in req.form['pairs'][0].split(" ")]
817 resp = cStringIO.StringIO()
817 resp = cStringIO.StringIO()
818 for b in self.repo.between(pairs):
818 for b in self.repo.between(pairs):
819 resp.write(" ".join(map(hex, b)) + "\n")
819 resp.write(" ".join(map(hex, b)) + "\n")
820 resp = resp.getvalue()
820 resp = resp.getvalue()
821 req.httphdr("application/mercurial-0.1", length=len(resp))
821 req.httphdr("application/mercurial-0.1", length=len(resp))
822 req.write(resp)
822 req.write(resp)
823
823
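do_branches and do_between use a plain text wire format: the request packs hex node IDs into one form field ('-' joins the nodes of a pair, spaces separate pairs) and each response line is a space-separated run of hex nodes. A rough sketch with shortened, made-up node IDs (real nodes are 40 hex digits):

    # encode a 'pairs' request the way a client would build it
    pairs = [("deadbeef", "cafebabe"), ("0000aaaa", "1111bbbb")]
    request = " ".join("%s-%s" % (a, b) for a, b in pairs)
    print(request)        # deadbeef-cafebabe 0000aaaa-1111bbbb

    # decode it the way do_between does: split on spaces, then on '-'
    decoded = [p.split("-") for p in request.split(" ")]
    print(decoded)        # [['deadbeef', 'cafebabe'], ['0000aaaa', '1111bbbb']]

    # each response line is a space-separated run of hex nodes,
    # one line per requested pair
    response = "deadbeef cafebabe\n\n"
    print([line.split() for line in response.splitlines()])
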
824 def do_changegroup(self, req):
824 def do_changegroup(self, req):
825 req.httphdr("application/mercurial-0.1")
825 req.httphdr("application/mercurial-0.1")
826 nodes = []
826 nodes = []
827 if not self.allowpull:
827 if not self.allowpull:
828 return
828 return
829
829
830 if req.form.has_key('roots'):
830 if req.form.has_key('roots'):
831 nodes = map(bin, req.form['roots'][0].split(" "))
831 nodes = map(bin, req.form['roots'][0].split(" "))
832
832
833 z = zlib.compressobj()
833 z = zlib.compressobj()
834 f = self.repo.changegroup(nodes, 'serve')
834 f = self.repo.changegroup(nodes, 'serve')
835 while 1:
835 while 1:
836 chunk = f.read(4096)
836 chunk = f.read(4096)
837 if not chunk:
837 if not chunk:
838 break
838 break
839 req.write(z.compress(chunk))
839 req.write(z.compress(chunk))
840
840
841 req.write(z.flush())
841 req.write(z.flush())
842
842
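do_changegroup streams the changegroup through a zlib compressor in 4096-byte chunks and flushes at the end, so the receiving side has to decompress incrementally. A self-contained sketch of both directions (the payload is a stand-in, not a real changegroup):

    import zlib

    # stand-in for the changegroup bytes the server would stream
    payload = b"changegroup data " * 1000

    # server side (as above): compress fixed-size chunks, then flush
    z = zlib.compressobj()
    wire = b""
    for i in range(0, len(payload), 4096):
        wire += z.compress(payload[i:i + 4096])
    wire += z.flush()

    # client side: feed the compressed stream back through a decompressor
    d = zlib.decompressobj()
    restored = d.decompress(wire) + d.flush()
    assert restored == payload
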
843 def do_archive(self, req):
843 def do_archive(self, req):
844 changeset = self.repo.lookup(req.form['node'][0])
844 changeset = self.repo.lookup(req.form['node'][0])
845 type_ = req.form['type'][0]
845 type_ = req.form['type'][0]
846 allowed = self.repo.ui.configlist("web", "allow_archive")
846 allowed = self.repo.ui.configlist("web", "allow_archive")
847 if (type_ in self.archives and (type_ in allowed or
847 if (type_ in self.archives and (type_ in allowed or
848 self.repo.ui.configbool("web", "allow" + type_, False))):
848 self.repo.ui.configbool("web", "allow" + type_, False))):
849 self.archive(req, changeset, type_)
849 self.archive(req, changeset, type_)
850 return
850 return
851
851
852 req.write(self.t("error"))
852 req.write(self.t("error"))
853
853
854 def do_static(self, req):
854 def do_static(self, req):
855 fname = req.form['file'][0]
855 fname = req.form['file'][0]
856 static = self.repo.ui.config("web", "static",
856 static = self.repo.ui.config("web", "static",
857 os.path.join(self.templatepath,
857 os.path.join(self.templatepath,
858 "static"))
858 "static"))
859 req.write(staticfile(static, fname, req)
859 req.write(staticfile(static, fname, req)
860 or self.t("error", error="%r not found" % fname))
860 or self.t("error", error="%r not found" % fname))
861
861
862 def do_capabilities(self, req):
862 def do_capabilities(self, req):
863 resp = 'unbundle stream=%d' % (self.repo.revlogversion,)
863 caps = ['unbundle']
864 if self.repo.ui.configbool('server', 'uncompressed'):
865 caps.append('stream=%d' % self.repo.revlogversion)
866 resp = ' '.join(caps)
864 req.httphdr("application/mercurial-0.1", length=len(resp))
867 req.httphdr("application/mercurial-0.1", length=len(resp))
865 req.write(resp)
868 req.write(resp)
866
869
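The capabilities reply is a single space-separated list; 'stream' carries the revlog version after '=', and with this change it is only advertised when server.uncompressed is enabled. A small sketch of how such a reply can be split apart (the parsing code is illustrative, not the actual client):

    # a capabilities response is a space-separated list; some entries
    # carry a value after '=', e.g. the revlog version for 'stream'
    resp = "unbundle stream=1"

    caps = {}
    for cap in resp.split():
        if "=" in cap:
            name, value = cap.split("=", 1)
            caps[name] = value
        else:
            caps[cap] = True

    print(sorted(caps.items()))   # [('stream', '1'), ('unbundle', True)]
    print("stream" in caps)       # only True when server.uncompressed is on
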
867 def check_perm(self, req, op, default):
870 def check_perm(self, req, op, default):
868 '''check permission for operation based on user auth.
871 '''check permission for operation based on user auth.
869 return true if op allowed, else false.
872 return true if op allowed, else false.
870 default is policy to use if no config given.'''
873 default is policy to use if no config given.'''
871
874
872 user = req.env.get('REMOTE_USER')
875 user = req.env.get('REMOTE_USER')
873
876
874 deny = self.repo.ui.configlist('web', 'deny_' + op)
877 deny = self.repo.ui.configlist('web', 'deny_' + op)
875 if deny and (not user or deny == ['*'] or user in deny):
878 if deny and (not user or deny == ['*'] or user in deny):
876 return False
879 return False
877
880
878 allow = self.repo.ui.configlist('web', 'allow_' + op)
881 allow = self.repo.ui.configlist('web', 'allow_' + op)
879 return (allow and (allow == ['*'] or user in allow)) or default
882 return (allow and (allow == ['*'] or user in allow)) or default
880
883
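check_perm gives the deny list absolute precedence, treats '*' as a wildcard, refuses anonymous users whenever a deny list exists at all, and only then consults the allow list or the caller-supplied default. The same decision table as a standalone function with made-up users:

    def check_perm(user, deny, allow, default):
        # deny wins: an empty user, a '*' wildcard or a direct match
        # all refuse the operation outright
        if deny and (not user or deny == ['*'] or user in deny):
            return False
        # an allow list grants access on wildcard or direct match;
        # with no allow list the configured default decides
        return (allow and (allow == ['*'] or user in allow)) or default

    print(check_perm('alice', deny=[], allow=['*'], default=False))            # True
    print(check_perm('mallory', deny=['mallory'], allow=['*'], default=False)) # False
    print(check_perm(None, deny=['mallory'], allow=[], default=True))          # False
    print(check_perm('bob', deny=[], allow=[], default=False))                 # False
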
881 def do_unbundle(self, req):
884 def do_unbundle(self, req):
882 def bail(response, headers={}):
885 def bail(response, headers={}):
883 length = int(req.env['CONTENT_LENGTH'])
886 length = int(req.env['CONTENT_LENGTH'])
884 for s in util.filechunkiter(req, limit=length):
887 for s in util.filechunkiter(req, limit=length):
885 # drain incoming bundle, else client will not see
888 # drain incoming bundle, else client will not see
886 # response when run outside cgi script
889 # response when run outside cgi script
887 pass
890 pass
888 req.httphdr("application/mercurial-0.1", headers=headers)
891 req.httphdr("application/mercurial-0.1", headers=headers)
889 req.write('0\n')
892 req.write('0\n')
890 req.write(response)
893 req.write(response)
891
894
892 # require ssl by default, auth info cannot be sniffed and
895 # require ssl by default, auth info cannot be sniffed and
893 # replayed
896 # replayed
894 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
897 ssl_req = self.repo.ui.configbool('web', 'push_ssl', True)
895 if ssl_req and not req.env.get('HTTPS'):
898 if ssl_req and not req.env.get('HTTPS'):
896 bail(_('ssl required\n'))
899 bail(_('ssl required\n'))
897 return
900 return
898
901
899 # do not allow push unless explicitly allowed
902 # do not allow push unless explicitly allowed
900 if not self.check_perm(req, 'push', False):
903 if not self.check_perm(req, 'push', False):
901 bail(_('push not authorized\n'),
904 bail(_('push not authorized\n'),
902 headers={'status': '401 Unauthorized'})
905 headers={'status': '401 Unauthorized'})
903 return
906 return
904
907
905 req.httphdr("application/mercurial-0.1")
908 req.httphdr("application/mercurial-0.1")
906
909
907 their_heads = req.form['heads'][0].split(' ')
910 their_heads = req.form['heads'][0].split(' ')
908
911
909 def check_heads():
912 def check_heads():
910 heads = map(hex, self.repo.heads())
913 heads = map(hex, self.repo.heads())
911 return their_heads == [hex('force')] or their_heads == heads
914 return their_heads == [hex('force')] or their_heads == heads
912
915
913 # fail early if possible
916 # fail early if possible
914 if not check_heads():
917 if not check_heads():
915 bail(_('unsynced changes\n'))
918 bail(_('unsynced changes\n'))
916 return
919 return
917
920
918 # do not lock repo until all changegroup data is
921 # do not lock repo until all changegroup data is
919 # streamed. save to temporary file.
922 # streamed. save to temporary file.
920
923
921 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
924 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
922 fp = os.fdopen(fd, 'wb+')
925 fp = os.fdopen(fd, 'wb+')
923 try:
926 try:
924 length = int(req.env['CONTENT_LENGTH'])
927 length = int(req.env['CONTENT_LENGTH'])
925 for s in util.filechunkiter(req, limit=length):
928 for s in util.filechunkiter(req, limit=length):
926 fp.write(s)
929 fp.write(s)
927
930
928 lock = self.repo.lock()
931 lock = self.repo.lock()
929 try:
932 try:
930 if not check_heads():
933 if not check_heads():
931 req.write('0\n')
934 req.write('0\n')
932 req.write(_('unsynced changes\n'))
935 req.write(_('unsynced changes\n'))
933 return
936 return
934
937
935 fp.seek(0)
938 fp.seek(0)
936
939
937 # send addchangegroup output to client
940 # send addchangegroup output to client
938
941
939 old_stdout = sys.stdout
942 old_stdout = sys.stdout
940 sys.stdout = cStringIO.StringIO()
943 sys.stdout = cStringIO.StringIO()
941
944
942 try:
945 try:
943 ret = self.repo.addchangegroup(fp, 'serve')
946 ret = self.repo.addchangegroup(fp, 'serve')
944 finally:
947 finally:
945 val = sys.stdout.getvalue()
948 val = sys.stdout.getvalue()
946 sys.stdout = old_stdout
949 sys.stdout = old_stdout
947 req.write('%d\n' % ret)
950 req.write('%d\n' % ret)
948 req.write(val)
951 req.write(val)
949 finally:
952 finally:
950 lock.release()
953 lock.release()
951 finally:
954 finally:
952 fp.close()
955 fp.close()
953 os.unlink(tempname)
956 os.unlink(tempname)
954
957
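On the wire, the unbundle reply is a decimal status line followed by free-form server output: '0' plus a reason when the push is rejected, otherwise the addchangegroup return value plus whatever was captured from stdout. A hypothetical client-side parse (the sample replies are illustrative):

    # first line: decimal status; rest: server messages
    def parse_unbundle_reply(reply):
        status, _, output = reply.partition("\n")
        return int(status), output

    print(parse_unbundle_reply("0\nunsynced changes\n"))
    print(parse_unbundle_reply("1\nadding changesets\nadding manifests\n"))
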
955 def do_stream_out(self, req):
958 def do_stream_out(self, req):
956 req.httphdr("application/mercurial-0.1")
959 req.httphdr("application/mercurial-0.1")
957 streamclone.stream_out(self.repo, req)
960 streamclone.stream_out(self.repo, req)
@@ -1,2254 +1,2258 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import *
8 from node import *
9 from i18n import gettext as _
9 from i18n import gettext as _
10 from demandload import *
10 from demandload import *
11 import repo
11 import repo
12 demandload(globals(), "appendfile changegroup")
12 demandload(globals(), "appendfile changegroup")
13 demandload(globals(), "changelog dirstate filelog manifest context")
13 demandload(globals(), "changelog dirstate filelog manifest context")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
14 demandload(globals(), "re lock transaction tempfile stat mdiff errno ui")
15 demandload(globals(), "os revlog time util")
15 demandload(globals(), "os revlog time util")
16
16
17 class localrepository(repo.repository):
17 class localrepository(repo.repository):
18 capabilities = ()
18 capabilities = ()
19
19
20 def __del__(self):
20 def __del__(self):
21 self.transhandle = None
21 self.transhandle = None
22 def __init__(self, parentui, path=None, create=0):
22 def __init__(self, parentui, path=None, create=0):
23 repo.repository.__init__(self)
23 repo.repository.__init__(self)
24 if not path:
24 if not path:
25 p = os.getcwd()
25 p = os.getcwd()
26 while not os.path.isdir(os.path.join(p, ".hg")):
26 while not os.path.isdir(os.path.join(p, ".hg")):
27 oldp = p
27 oldp = p
28 p = os.path.dirname(p)
28 p = os.path.dirname(p)
29 if p == oldp:
29 if p == oldp:
30 raise repo.RepoError(_("no repo found"))
30 raise repo.RepoError(_("no repo found"))
31 path = p
31 path = p
32 self.path = os.path.join(path, ".hg")
32 self.path = os.path.join(path, ".hg")
33
33
34 if not create and not os.path.isdir(self.path):
34 if not create and not os.path.isdir(self.path):
35 raise repo.RepoError(_("repository %s not found") % path)
35 raise repo.RepoError(_("repository %s not found") % path)
36
36
37 self.root = os.path.abspath(path)
37 self.root = os.path.abspath(path)
38 self.origroot = path
38 self.origroot = path
39 self.ui = ui.ui(parentui=parentui)
39 self.ui = ui.ui(parentui=parentui)
40 self.opener = util.opener(self.path)
40 self.opener = util.opener(self.path)
41 self.wopener = util.opener(self.root)
41 self.wopener = util.opener(self.root)
42
42
43 try:
43 try:
44 self.ui.readconfig(self.join("hgrc"), self.root)
44 self.ui.readconfig(self.join("hgrc"), self.root)
45 except IOError:
45 except IOError:
46 pass
46 pass
47
47
48 v = self.ui.revlogopts
48 v = self.ui.revlogopts
49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
49 self.revlogversion = int(v.get('format', revlog.REVLOG_DEFAULT_FORMAT))
50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
50 self.revlogv1 = self.revlogversion != revlog.REVLOGV0
51 fl = v.get('flags', None)
51 fl = v.get('flags', None)
52 flags = 0
52 flags = 0
53 if fl != None:
53 if fl != None:
54 for x in fl.split():
54 for x in fl.split():
55 flags |= revlog.flagstr(x)
55 flags |= revlog.flagstr(x)
56 elif self.revlogv1:
56 elif self.revlogv1:
57 flags = revlog.REVLOG_DEFAULT_FLAGS
57 flags = revlog.REVLOG_DEFAULT_FLAGS
58
58
59 v = self.revlogversion | flags
59 v = self.revlogversion | flags
60 self.manifest = manifest.manifest(self.opener, v)
60 self.manifest = manifest.manifest(self.opener, v)
61 self.changelog = changelog.changelog(self.opener, v)
61 self.changelog = changelog.changelog(self.opener, v)
62
62
63 # the changelog might not have the inline index flag
63 # the changelog might not have the inline index flag
64 # on. If the format of the changelog is the same as found in
64 # on. If the format of the changelog is the same as found in
65 # .hgrc, apply any flags found in the .hgrc as well.
65 # .hgrc, apply any flags found in the .hgrc as well.
66 # Otherwise, just version from the changelog
66 # Otherwise, just version from the changelog
67 v = self.changelog.version
67 v = self.changelog.version
68 if v == self.revlogversion:
68 if v == self.revlogversion:
69 v |= flags
69 v |= flags
70 self.revlogversion = v
70 self.revlogversion = v
71
71
72 self.tagscache = None
72 self.tagscache = None
73 self.nodetagscache = None
73 self.nodetagscache = None
74 self.encodepats = None
74 self.encodepats = None
75 self.decodepats = None
75 self.decodepats = None
76 self.transhandle = None
76 self.transhandle = None
77
77
78 if create:
78 if create:
79 if not os.path.exists(path):
79 if not os.path.exists(path):
80 os.mkdir(path)
80 os.mkdir(path)
81 os.mkdir(self.path)
81 os.mkdir(self.path)
82 os.mkdir(self.join("data"))
82 os.mkdir(self.join("data"))
83
83
84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
84 self.dirstate = dirstate.dirstate(self.opener, self.ui, self.root)
85
85
86 def hook(self, name, throw=False, **args):
86 def hook(self, name, throw=False, **args):
87 def callhook(hname, funcname):
87 def callhook(hname, funcname):
88 '''call python hook. hook is callable object, looked up as
88 '''call python hook. hook is callable object, looked up as
89 name in python module. if callable returns "true", hook
89 name in python module. if callable returns "true", hook
90 fails, else passes. if hook raises exception, treated as
90 fails, else passes. if hook raises exception, treated as
91 hook failure. exception propagates if throw is "true".
91 hook failure. exception propagates if throw is "true".
92
92
93 reason for "true" meaning "hook failed" is so that
93 reason for "true" meaning "hook failed" is so that
94 unmodified commands (e.g. mercurial.commands.update) can
94 unmodified commands (e.g. mercurial.commands.update) can
95 be run as hooks without wrappers to convert return values.'''
95 be run as hooks without wrappers to convert return values.'''
96
96
97 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
97 self.ui.note(_("calling hook %s: %s\n") % (hname, funcname))
98 d = funcname.rfind('.')
98 d = funcname.rfind('.')
99 if d == -1:
99 if d == -1:
100 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
100 raise util.Abort(_('%s hook is invalid ("%s" not in a module)')
101 % (hname, funcname))
101 % (hname, funcname))
102 modname = funcname[:d]
102 modname = funcname[:d]
103 try:
103 try:
104 obj = __import__(modname)
104 obj = __import__(modname)
105 except ImportError:
105 except ImportError:
106 try:
106 try:
107 # extensions are loaded with hgext_ prefix
107 # extensions are loaded with hgext_ prefix
108 obj = __import__("hgext_%s" % modname)
108 obj = __import__("hgext_%s" % modname)
109 except ImportError:
109 except ImportError:
110 raise util.Abort(_('%s hook is invalid '
110 raise util.Abort(_('%s hook is invalid '
111 '(import of "%s" failed)') %
111 '(import of "%s" failed)') %
112 (hname, modname))
112 (hname, modname))
113 try:
113 try:
114 for p in funcname.split('.')[1:]:
114 for p in funcname.split('.')[1:]:
115 obj = getattr(obj, p)
115 obj = getattr(obj, p)
116 except AttributeError, err:
116 except AttributeError, err:
117 raise util.Abort(_('%s hook is invalid '
117 raise util.Abort(_('%s hook is invalid '
118 '("%s" is not defined)') %
118 '("%s" is not defined)') %
119 (hname, funcname))
119 (hname, funcname))
120 if not callable(obj):
120 if not callable(obj):
121 raise util.Abort(_('%s hook is invalid '
121 raise util.Abort(_('%s hook is invalid '
122 '("%s" is not callable)') %
122 '("%s" is not callable)') %
123 (hname, funcname))
123 (hname, funcname))
124 try:
124 try:
125 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
125 r = obj(ui=self.ui, repo=self, hooktype=name, **args)
126 except (KeyboardInterrupt, util.SignalInterrupt):
126 except (KeyboardInterrupt, util.SignalInterrupt):
127 raise
127 raise
128 except Exception, exc:
128 except Exception, exc:
129 if isinstance(exc, util.Abort):
129 if isinstance(exc, util.Abort):
130 self.ui.warn(_('error: %s hook failed: %s\n') %
130 self.ui.warn(_('error: %s hook failed: %s\n') %
131 (hname, exc.args[0] % exc.args[1:]))
131 (hname, exc.args[0] % exc.args[1:]))
132 else:
132 else:
133 self.ui.warn(_('error: %s hook raised an exception: '
133 self.ui.warn(_('error: %s hook raised an exception: '
134 '%s\n') % (hname, exc))
134 '%s\n') % (hname, exc))
135 if throw:
135 if throw:
136 raise
136 raise
137 self.ui.print_exc()
137 self.ui.print_exc()
138 return True
138 return True
139 if r:
139 if r:
140 if throw:
140 if throw:
141 raise util.Abort(_('%s hook failed') % hname)
141 raise util.Abort(_('%s hook failed') % hname)
142 self.ui.warn(_('warning: %s hook failed\n') % hname)
142 self.ui.warn(_('warning: %s hook failed\n') % hname)
143 return r
143 return r
144
144
145 def runhook(name, cmd):
145 def runhook(name, cmd):
146 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
146 self.ui.note(_("running hook %s: %s\n") % (name, cmd))
147 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
147 env = dict([('HG_' + k.upper(), v) for k, v in args.iteritems()])
148 r = util.system(cmd, environ=env, cwd=self.root)
148 r = util.system(cmd, environ=env, cwd=self.root)
149 if r:
149 if r:
150 desc, r = util.explain_exit(r)
150 desc, r = util.explain_exit(r)
151 if throw:
151 if throw:
152 raise util.Abort(_('%s hook %s') % (name, desc))
152 raise util.Abort(_('%s hook %s') % (name, desc))
153 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
153 self.ui.warn(_('warning: %s hook %s\n') % (name, desc))
154 return r
154 return r
155
155
156 r = False
156 r = False
157 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
157 hooks = [(hname, cmd) for hname, cmd in self.ui.configitems("hooks")
158 if hname.split(".", 1)[0] == name and cmd]
158 if hname.split(".", 1)[0] == name and cmd]
159 hooks.sort()
159 hooks.sort()
160 for hname, cmd in hooks:
160 for hname, cmd in hooks:
161 if cmd.startswith('python:'):
161 if cmd.startswith('python:'):
162 r = callhook(hname, cmd[7:].strip()) or r
162 r = callhook(hname, cmd[7:].strip()) or r
163 else:
163 else:
164 r = runhook(hname, cmd) or r
164 r = runhook(hname, cmd) or r
165 return r
165 return r
166
166
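Hooks come from the [hooks] section; a value starting with 'python:' names a dotted callable which is invoked as obj(ui=..., repo=..., hooktype=..., **args) and fails when it returns a true value, while any other value is run as a shell command with the arguments exported as HG_* variables. A minimal python hook under those conventions ('myhooks' and the hook body are made up):

    # wired up in hgrc roughly like:
    #
    #   [hooks]
    #   pretxncommit.whitespace = python:myhooks.check_whitespace
    #   commit = echo committed $HG_NODE
    #
    def check_whitespace(ui, repo, hooktype, node=None, **kwargs):
        ui.note("checking %s in %s hook\n" % (node, hooktype))
        # returning False (or None) lets the operation proceed;
        # returning True fails the hook (and aborts when throw=True)
        return False
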
167 tag_disallowed = ':\r\n'
167 tag_disallowed = ':\r\n'
168
168
169 def tag(self, name, node, local=False, message=None, user=None, date=None):
169 def tag(self, name, node, local=False, message=None, user=None, date=None):
170 '''tag a revision with a symbolic name.
170 '''tag a revision with a symbolic name.
171
171
172 if local is True, the tag is stored in a per-repository file.
172 if local is True, the tag is stored in a per-repository file.
173 otherwise, it is stored in the .hgtags file, and a new
173 otherwise, it is stored in the .hgtags file, and a new
174 changeset is committed with the change.
174 changeset is committed with the change.
175
175
176 keyword arguments:
176 keyword arguments:
177
177
178 local: whether to store tag in non-version-controlled file
178 local: whether to store tag in non-version-controlled file
179 (default False)
179 (default False)
180
180
181 message: commit message to use if committing
181 message: commit message to use if committing
182
182
183 user: name of user to use if committing
183 user: name of user to use if committing
184
184
185 date: date tuple to use if committing'''
185 date: date tuple to use if committing'''
186
186
187 for c in self.tag_disallowed:
187 for c in self.tag_disallowed:
188 if c in name:
188 if c in name:
189 raise util.Abort(_('%r cannot be used in a tag name') % c)
189 raise util.Abort(_('%r cannot be used in a tag name') % c)
190
190
191 self.hook('pretag', throw=True, node=node, tag=name, local=local)
191 self.hook('pretag', throw=True, node=node, tag=name, local=local)
192
192
193 if local:
193 if local:
194 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
194 self.opener('localtags', 'a').write('%s %s\n' % (node, name))
195 self.hook('tag', node=node, tag=name, local=local)
195 self.hook('tag', node=node, tag=name, local=local)
196 return
196 return
197
197
198 for x in self.changes():
198 for x in self.changes():
199 if '.hgtags' in x:
199 if '.hgtags' in x:
200 raise util.Abort(_('working copy of .hgtags is changed '
200 raise util.Abort(_('working copy of .hgtags is changed '
201 '(please commit .hgtags manually)'))
201 '(please commit .hgtags manually)'))
202
202
203 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
203 self.wfile('.hgtags', 'ab').write('%s %s\n' % (node, name))
204 if self.dirstate.state('.hgtags') == '?':
204 if self.dirstate.state('.hgtags') == '?':
205 self.add(['.hgtags'])
205 self.add(['.hgtags'])
206
206
207 if not message:
207 if not message:
208 message = _('Added tag %s for changeset %s') % (name, node)
208 message = _('Added tag %s for changeset %s') % (name, node)
209
209
210 self.commit(['.hgtags'], message, user, date)
210 self.commit(['.hgtags'], message, user, date)
211 self.hook('tag', node=node, tag=name, local=local)
211 self.hook('tag', node=node, tag=name, local=local)
212
212
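Both .hgtags and .hg/localtags store one '<hex node> <tag name>' pair per line, and parsetag in tags() below splits on the first space only, so tag names may contain spaces; entries read later override earlier ones. A toy parse of that format (the node values are fabricated):

    sample = (
        "1234567890123456789012345678901234567890 v1.0\n"
        "abcdefabcdefabcdefabcdefabcdefabcdefabcd release 1.1\n"
    )

    tags = {}
    for line in sample.splitlines():
        # split on the first space only: everything after it is the name
        node, name = line.split(" ", 1)
        tags[name.strip()] = node

    print(tags["release 1.1"])
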
213 def tags(self):
213 def tags(self):
214 '''return a mapping of tag to node'''
214 '''return a mapping of tag to node'''
215 if not self.tagscache:
215 if not self.tagscache:
216 self.tagscache = {}
216 self.tagscache = {}
217
217
218 def parsetag(line, context):
218 def parsetag(line, context):
219 if not line:
219 if not line:
220 return
220 return
221 s = line.split(" ", 1)
221 s = line.split(" ", 1)
222 if len(s) != 2:
222 if len(s) != 2:
223 self.ui.warn(_("%s: cannot parse entry\n") % context)
223 self.ui.warn(_("%s: cannot parse entry\n") % context)
224 return
224 return
225 node, key = s
225 node, key = s
226 key = key.strip()
226 key = key.strip()
227 try:
227 try:
228 bin_n = bin(node)
228 bin_n = bin(node)
229 except TypeError:
229 except TypeError:
230 self.ui.warn(_("%s: node '%s' is not well formed\n") %
230 self.ui.warn(_("%s: node '%s' is not well formed\n") %
231 (context, node))
231 (context, node))
232 return
232 return
233 if bin_n not in self.changelog.nodemap:
233 if bin_n not in self.changelog.nodemap:
234 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
234 self.ui.warn(_("%s: tag '%s' refers to unknown node\n") %
235 (context, key))
235 (context, key))
236 return
236 return
237 self.tagscache[key] = bin_n
237 self.tagscache[key] = bin_n
238
238
239 # read the tags file from each head, ending with the tip,
239 # read the tags file from each head, ending with the tip,
240 # and add each tag found to the map, with "newer" ones
240 # and add each tag found to the map, with "newer" ones
241 # taking precedence
241 # taking precedence
242 heads = self.heads()
242 heads = self.heads()
243 heads.reverse()
243 heads.reverse()
244 fl = self.file(".hgtags")
244 fl = self.file(".hgtags")
245 for node in heads:
245 for node in heads:
246 change = self.changelog.read(node)
246 change = self.changelog.read(node)
247 rev = self.changelog.rev(node)
247 rev = self.changelog.rev(node)
248 fn, ff = self.manifest.find(change[0], '.hgtags')
248 fn, ff = self.manifest.find(change[0], '.hgtags')
249 if fn is None: continue
249 if fn is None: continue
250 count = 0
250 count = 0
251 for l in fl.read(fn).splitlines():
251 for l in fl.read(fn).splitlines():
252 count += 1
252 count += 1
253 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
253 parsetag(l, _(".hgtags (rev %d:%s), line %d") %
254 (rev, short(node), count))
254 (rev, short(node), count))
255 try:
255 try:
256 f = self.opener("localtags")
256 f = self.opener("localtags")
257 count = 0
257 count = 0
258 for l in f:
258 for l in f:
259 count += 1
259 count += 1
260 parsetag(l, _("localtags, line %d") % count)
260 parsetag(l, _("localtags, line %d") % count)
261 except IOError:
261 except IOError:
262 pass
262 pass
263
263
264 self.tagscache['tip'] = self.changelog.tip()
264 self.tagscache['tip'] = self.changelog.tip()
265
265
266 return self.tagscache
266 return self.tagscache
267
267
268 def tagslist(self):
268 def tagslist(self):
269 '''return a list of tags ordered by revision'''
269 '''return a list of tags ordered by revision'''
270 l = []
270 l = []
271 for t, n in self.tags().items():
271 for t, n in self.tags().items():
272 try:
272 try:
273 r = self.changelog.rev(n)
273 r = self.changelog.rev(n)
274 except:
274 except:
275 r = -2 # sort to the beginning of the list if unknown
275 r = -2 # sort to the beginning of the list if unknown
276 l.append((r, t, n))
276 l.append((r, t, n))
277 l.sort()
277 l.sort()
278 return [(t, n) for r, t, n in l]
278 return [(t, n) for r, t, n in l]
279
279
280 def nodetags(self, node):
280 def nodetags(self, node):
281 '''return the tags associated with a node'''
281 '''return the tags associated with a node'''
282 if not self.nodetagscache:
282 if not self.nodetagscache:
283 self.nodetagscache = {}
283 self.nodetagscache = {}
284 for t, n in self.tags().items():
284 for t, n in self.tags().items():
285 self.nodetagscache.setdefault(n, []).append(t)
285 self.nodetagscache.setdefault(n, []).append(t)
286 return self.nodetagscache.get(node, [])
286 return self.nodetagscache.get(node, [])
287
287
288 def lookup(self, key):
288 def lookup(self, key):
289 try:
289 try:
290 return self.tags()[key]
290 return self.tags()[key]
291 except KeyError:
291 except KeyError:
292 try:
292 try:
293 return self.changelog.lookup(key)
293 return self.changelog.lookup(key)
294 except:
294 except:
295 raise repo.RepoError(_("unknown revision '%s'") % key)
295 raise repo.RepoError(_("unknown revision '%s'") % key)
296
296
297 def dev(self):
297 def dev(self):
298 return os.lstat(self.path).st_dev
298 return os.lstat(self.path).st_dev
299
299
300 def local(self):
300 def local(self):
301 return True
301 return True
302
302
303 def join(self, f):
303 def join(self, f):
304 return os.path.join(self.path, f)
304 return os.path.join(self.path, f)
305
305
306 def wjoin(self, f):
306 def wjoin(self, f):
307 return os.path.join(self.root, f)
307 return os.path.join(self.root, f)
308
308
309 def file(self, f):
309 def file(self, f):
310 if f[0] == '/':
310 if f[0] == '/':
311 f = f[1:]
311 f = f[1:]
312 return filelog.filelog(self.opener, f, self.revlogversion)
312 return filelog.filelog(self.opener, f, self.revlogversion)
313
313
314 def changectx(self, changeid):
314 def changectx(self, changeid):
315 return context.changectx(self, changeid)
315 return context.changectx(self, changeid)
316
316
317 def filectx(self, path, changeid=None, fileid=None):
317 def filectx(self, path, changeid=None, fileid=None):
318 """changeid can be a changeset revision, node, or tag.
318 """changeid can be a changeset revision, node, or tag.
319 fileid can be a file revision or node."""
319 fileid can be a file revision or node."""
320 return context.filectx(self, path, changeid, fileid)
320 return context.filectx(self, path, changeid, fileid)
321
321
322 def getcwd(self):
322 def getcwd(self):
323 return self.dirstate.getcwd()
323 return self.dirstate.getcwd()
324
324
325 def wfile(self, f, mode='r'):
325 def wfile(self, f, mode='r'):
326 return self.wopener(f, mode)
326 return self.wopener(f, mode)
327
327
328 def wread(self, filename):
328 def wread(self, filename):
329 if self.encodepats == None:
329 if self.encodepats == None:
330 l = []
330 l = []
331 for pat, cmd in self.ui.configitems("encode"):
331 for pat, cmd in self.ui.configitems("encode"):
332 mf = util.matcher(self.root, "", [pat], [], [])[1]
332 mf = util.matcher(self.root, "", [pat], [], [])[1]
333 l.append((mf, cmd))
333 l.append((mf, cmd))
334 self.encodepats = l
334 self.encodepats = l
335
335
336 data = self.wopener(filename, 'r').read()
336 data = self.wopener(filename, 'r').read()
337
337
338 for mf, cmd in self.encodepats:
338 for mf, cmd in self.encodepats:
339 if mf(filename):
339 if mf(filename):
340 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
340 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
341 data = util.filter(data, cmd)
341 data = util.filter(data, cmd)
342 break
342 break
343
343
344 return data
344 return data
345
345
346 def wwrite(self, filename, data, fd=None):
346 def wwrite(self, filename, data, fd=None):
347 if self.decodepats == None:
347 if self.decodepats == None:
348 l = []
348 l = []
349 for pat, cmd in self.ui.configitems("decode"):
349 for pat, cmd in self.ui.configitems("decode"):
350 mf = util.matcher(self.root, "", [pat], [], [])[1]
350 mf = util.matcher(self.root, "", [pat], [], [])[1]
351 l.append((mf, cmd))
351 l.append((mf, cmd))
352 self.decodepats = l
352 self.decodepats = l
353
353
354 for mf, cmd in self.decodepats:
354 for mf, cmd in self.decodepats:
355 if mf(filename):
355 if mf(filename):
356 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
356 self.ui.debug(_("filtering %s through %s\n") % (filename, cmd))
357 data = util.filter(data, cmd)
357 data = util.filter(data, cmd)
358 break
358 break
359
359
360 if fd:
360 if fd:
361 return fd.write(data)
361 return fd.write(data)
362 return self.wopener(filename, 'w').write(data)
362 return self.wopener(filename, 'w').write(data)
363
363
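wread and wwrite pipe file data through the first [encode] or [decode] filter whose pattern matches the file name; the real code shells the data through util.filter, but the lookup itself is just first-match-wins. A toy version using glob patterns and plain Python callables instead of shell commands:

    import fnmatch

    # first matching pattern decides which transform runs (illustrative
    # stand-in for util.matcher plus a shell pipe through util.filter)
    filters = [
        ("*.txt", lambda data: data.replace("\r\n", "\n")),
        ("*.bin", lambda data: data),
    ]

    def apply_filters(filename, data):
        for pattern, fn in filters:
            if fnmatch.fnmatch(filename, pattern):
                return fn(data)
        return data

    print(apply_filters("notes.txt", "a\r\nb\r\n"))
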
364 def transaction(self):
364 def transaction(self):
365 tr = self.transhandle
365 tr = self.transhandle
366 if tr != None and tr.running():
366 if tr != None and tr.running():
367 return tr.nest()
367 return tr.nest()
368
368
369 # save dirstate for rollback
369 # save dirstate for rollback
370 try:
370 try:
371 ds = self.opener("dirstate").read()
371 ds = self.opener("dirstate").read()
372 except IOError:
372 except IOError:
373 ds = ""
373 ds = ""
374 self.opener("journal.dirstate", "w").write(ds)
374 self.opener("journal.dirstate", "w").write(ds)
375
375
376 tr = transaction.transaction(self.ui.warn, self.opener,
376 tr = transaction.transaction(self.ui.warn, self.opener,
377 self.join("journal"),
377 self.join("journal"),
378 aftertrans(self.path))
378 aftertrans(self.path))
379 self.transhandle = tr
379 self.transhandle = tr
380 return tr
380 return tr
381
381
382 def recover(self):
382 def recover(self):
383 l = self.lock()
383 l = self.lock()
384 if os.path.exists(self.join("journal")):
384 if os.path.exists(self.join("journal")):
385 self.ui.status(_("rolling back interrupted transaction\n"))
385 self.ui.status(_("rolling back interrupted transaction\n"))
386 transaction.rollback(self.opener, self.join("journal"))
386 transaction.rollback(self.opener, self.join("journal"))
387 self.reload()
387 self.reload()
388 return True
388 return True
389 else:
389 else:
390 self.ui.warn(_("no interrupted transaction available\n"))
390 self.ui.warn(_("no interrupted transaction available\n"))
391 return False
391 return False
392
392
393 def rollback(self, wlock=None):
393 def rollback(self, wlock=None):
394 if not wlock:
394 if not wlock:
395 wlock = self.wlock()
395 wlock = self.wlock()
396 l = self.lock()
396 l = self.lock()
397 if os.path.exists(self.join("undo")):
397 if os.path.exists(self.join("undo")):
398 self.ui.status(_("rolling back last transaction\n"))
398 self.ui.status(_("rolling back last transaction\n"))
399 transaction.rollback(self.opener, self.join("undo"))
399 transaction.rollback(self.opener, self.join("undo"))
400 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
400 util.rename(self.join("undo.dirstate"), self.join("dirstate"))
401 self.reload()
401 self.reload()
402 self.wreload()
402 self.wreload()
403 else:
403 else:
404 self.ui.warn(_("no rollback information available\n"))
404 self.ui.warn(_("no rollback information available\n"))
405
405
406 def wreload(self):
406 def wreload(self):
407 self.dirstate.read()
407 self.dirstate.read()
408
408
409 def reload(self):
409 def reload(self):
410 self.changelog.load()
410 self.changelog.load()
411 self.manifest.load()
411 self.manifest.load()
412 self.tagscache = None
412 self.tagscache = None
413 self.nodetagscache = None
413 self.nodetagscache = None
414
414
415 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
415 def do_lock(self, lockname, wait, releasefn=None, acquirefn=None,
416 desc=None):
416 desc=None):
417 try:
417 try:
418 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
418 l = lock.lock(self.join(lockname), 0, releasefn, desc=desc)
419 except lock.LockHeld, inst:
419 except lock.LockHeld, inst:
420 if not wait:
420 if not wait:
421 raise
421 raise
422 self.ui.warn(_("waiting for lock on %s held by %s\n") %
422 self.ui.warn(_("waiting for lock on %s held by %s\n") %
423 (desc, inst.args[0]))
423 (desc, inst.args[0]))
424 # default to 600 seconds timeout
424 # default to 600 seconds timeout
425 l = lock.lock(self.join(lockname),
425 l = lock.lock(self.join(lockname),
426 int(self.ui.config("ui", "timeout") or 600),
426 int(self.ui.config("ui", "timeout") or 600),
427 releasefn, desc=desc)
427 releasefn, desc=desc)
428 if acquirefn:
428 if acquirefn:
429 acquirefn()
429 acquirefn()
430 return l
430 return l
431
431
432 def lock(self, wait=1):
432 def lock(self, wait=1):
433 return self.do_lock("lock", wait, acquirefn=self.reload,
433 return self.do_lock("lock", wait, acquirefn=self.reload,
434 desc=_('repository %s') % self.origroot)
434 desc=_('repository %s') % self.origroot)
435
435
436 def wlock(self, wait=1):
436 def wlock(self, wait=1):
437 return self.do_lock("wlock", wait, self.dirstate.write,
437 return self.do_lock("wlock", wait, self.dirstate.write,
438 self.wreload,
438 self.wreload,
439 desc=_('working directory of %s') % self.origroot)
439 desc=_('working directory of %s') % self.origroot)
440
440
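do_lock falls back to a 600-second wait when [ui] timeout is unset, relying on config() returning an empty value in that case. A two-line sketch of that fallback:

    def lock_timeout(configured):
        # mirrors int(self.ui.config("ui", "timeout") or 600) above
        return int(configured or 600)

    print(lock_timeout(None))   # 600 -- no timeout entry under [ui]
    print(lock_timeout("30"))   # 30  -- e.g. 'timeout = 30' under [ui]
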
441 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
441 def checkfilemerge(self, filename, text, filelog, manifest1, manifest2):
442 "determine whether a new filenode is needed"
442 "determine whether a new filenode is needed"
443 fp1 = manifest1.get(filename, nullid)
443 fp1 = manifest1.get(filename, nullid)
444 fp2 = manifest2.get(filename, nullid)
444 fp2 = manifest2.get(filename, nullid)
445
445
446 if fp2 != nullid:
446 if fp2 != nullid:
447 # is one parent an ancestor of the other?
447 # is one parent an ancestor of the other?
448 fpa = filelog.ancestor(fp1, fp2)
448 fpa = filelog.ancestor(fp1, fp2)
449 if fpa == fp1:
449 if fpa == fp1:
450 fp1, fp2 = fp2, nullid
450 fp1, fp2 = fp2, nullid
451 elif fpa == fp2:
451 elif fpa == fp2:
452 fp2 = nullid
452 fp2 = nullid
453
453
454 # is the file unmodified from the parent? report existing entry
454 # is the file unmodified from the parent? report existing entry
455 if fp2 == nullid and text == filelog.read(fp1):
455 if fp2 == nullid and text == filelog.read(fp1):
456 return (fp1, None, None)
456 return (fp1, None, None)
457
457
458 return (None, fp1, fp2)
458 return (None, fp1, fp2)
459
459
460 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
460 def rawcommit(self, files, text, user, date, p1=None, p2=None, wlock=None):
461 orig_parent = self.dirstate.parents()[0] or nullid
461 orig_parent = self.dirstate.parents()[0] or nullid
462 p1 = p1 or self.dirstate.parents()[0] or nullid
462 p1 = p1 or self.dirstate.parents()[0] or nullid
463 p2 = p2 or self.dirstate.parents()[1] or nullid
463 p2 = p2 or self.dirstate.parents()[1] or nullid
464 c1 = self.changelog.read(p1)
464 c1 = self.changelog.read(p1)
465 c2 = self.changelog.read(p2)
465 c2 = self.changelog.read(p2)
466 m1 = self.manifest.read(c1[0])
466 m1 = self.manifest.read(c1[0])
467 mf1 = self.manifest.readflags(c1[0])
467 mf1 = self.manifest.readflags(c1[0])
468 m2 = self.manifest.read(c2[0])
468 m2 = self.manifest.read(c2[0])
469 changed = []
469 changed = []
470
470
471 if orig_parent == p1:
471 if orig_parent == p1:
472 update_dirstate = 1
472 update_dirstate = 1
473 else:
473 else:
474 update_dirstate = 0
474 update_dirstate = 0
475
475
476 if not wlock:
476 if not wlock:
477 wlock = self.wlock()
477 wlock = self.wlock()
478 l = self.lock()
478 l = self.lock()
479 tr = self.transaction()
479 tr = self.transaction()
480 mm = m1.copy()
480 mm = m1.copy()
481 mfm = mf1.copy()
481 mfm = mf1.copy()
482 linkrev = self.changelog.count()
482 linkrev = self.changelog.count()
483 for f in files:
483 for f in files:
484 try:
484 try:
485 t = self.wread(f)
485 t = self.wread(f)
486 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
486 tm = util.is_exec(self.wjoin(f), mfm.get(f, False))
487 r = self.file(f)
487 r = self.file(f)
488 mfm[f] = tm
488 mfm[f] = tm
489
489
490 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
490 (entry, fp1, fp2) = self.checkfilemerge(f, t, r, m1, m2)
491 if entry:
491 if entry:
492 mm[f] = entry
492 mm[f] = entry
493 continue
493 continue
494
494
495 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
495 mm[f] = r.add(t, {}, tr, linkrev, fp1, fp2)
496 changed.append(f)
496 changed.append(f)
497 if update_dirstate:
497 if update_dirstate:
498 self.dirstate.update([f], "n")
498 self.dirstate.update([f], "n")
499 except IOError:
499 except IOError:
500 try:
500 try:
501 del mm[f]
501 del mm[f]
502 del mfm[f]
502 del mfm[f]
503 if update_dirstate:
503 if update_dirstate:
504 self.dirstate.forget([f])
504 self.dirstate.forget([f])
505 except:
505 except:
506 # deleted from p2?
506 # deleted from p2?
507 pass
507 pass
508
508
509 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
509 mnode = self.manifest.add(mm, mfm, tr, linkrev, c1[0], c2[0])
510 user = user or self.ui.username()
510 user = user or self.ui.username()
511 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
511 n = self.changelog.add(mnode, changed, text, tr, p1, p2, user, date)
512 tr.close()
512 tr.close()
513 if update_dirstate:
513 if update_dirstate:
514 self.dirstate.setparents(n, nullid)
514 self.dirstate.setparents(n, nullid)
515
515
516 def commit(self, files=None, text="", user=None, date=None,
516 def commit(self, files=None, text="", user=None, date=None,
517 match=util.always, force=False, lock=None, wlock=None,
517 match=util.always, force=False, lock=None, wlock=None,
518 force_editor=False):
518 force_editor=False):
519 commit = []
519 commit = []
520 remove = []
520 remove = []
521 changed = []
521 changed = []
522
522
523 if files:
523 if files:
524 for f in files:
524 for f in files:
525 s = self.dirstate.state(f)
525 s = self.dirstate.state(f)
526 if s in 'nmai':
526 if s in 'nmai':
527 commit.append(f)
527 commit.append(f)
528 elif s == 'r':
528 elif s == 'r':
529 remove.append(f)
529 remove.append(f)
530 else:
530 else:
531 self.ui.warn(_("%s not tracked!\n") % f)
531 self.ui.warn(_("%s not tracked!\n") % f)
532 else:
532 else:
533 modified, added, removed, deleted, unknown = self.changes(match=match)
533 modified, added, removed, deleted, unknown = self.changes(match=match)
534 commit = modified + added
534 commit = modified + added
535 remove = removed
535 remove = removed
536
536
537 p1, p2 = self.dirstate.parents()
537 p1, p2 = self.dirstate.parents()
538 c1 = self.changelog.read(p1)
538 c1 = self.changelog.read(p1)
539 c2 = self.changelog.read(p2)
539 c2 = self.changelog.read(p2)
540 m1 = self.manifest.read(c1[0])
540 m1 = self.manifest.read(c1[0])
541 mf1 = self.manifest.readflags(c1[0])
541 mf1 = self.manifest.readflags(c1[0])
542 m2 = self.manifest.read(c2[0])
542 m2 = self.manifest.read(c2[0])
543
543
544 if not commit and not remove and not force and p2 == nullid:
544 if not commit and not remove and not force and p2 == nullid:
545 self.ui.status(_("nothing changed\n"))
545 self.ui.status(_("nothing changed\n"))
546 return None
546 return None
547
547
548 xp1 = hex(p1)
548 xp1 = hex(p1)
549 if p2 == nullid: xp2 = ''
549 if p2 == nullid: xp2 = ''
550 else: xp2 = hex(p2)
550 else: xp2 = hex(p2)
551
551
552 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
552 self.hook("precommit", throw=True, parent1=xp1, parent2=xp2)
553
553
554 if not wlock:
554 if not wlock:
555 wlock = self.wlock()
555 wlock = self.wlock()
556 if not lock:
556 if not lock:
557 lock = self.lock()
557 lock = self.lock()
558 tr = self.transaction()
558 tr = self.transaction()
559
559
560 # check in files
560 # check in files
561 new = {}
561 new = {}
562 linkrev = self.changelog.count()
562 linkrev = self.changelog.count()
563 commit.sort()
563 commit.sort()
564 for f in commit:
564 for f in commit:
565 self.ui.note(f + "\n")
565 self.ui.note(f + "\n")
566 try:
566 try:
567 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
567 mf1[f] = util.is_exec(self.wjoin(f), mf1.get(f, False))
568 t = self.wread(f)
568 t = self.wread(f)
569 except IOError:
569 except IOError:
570 self.ui.warn(_("trouble committing %s!\n") % f)
570 self.ui.warn(_("trouble committing %s!\n") % f)
571 raise
571 raise
572
572
573 r = self.file(f)
573 r = self.file(f)
574
574
575 meta = {}
575 meta = {}
576 cp = self.dirstate.copied(f)
576 cp = self.dirstate.copied(f)
577 if cp:
577 if cp:
578 meta["copy"] = cp
578 meta["copy"] = cp
579 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
579 meta["copyrev"] = hex(m1.get(cp, m2.get(cp, nullid)))
580 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
580 self.ui.debug(_(" %s: copy %s:%s\n") % (f, cp, meta["copyrev"]))
581 fp1, fp2 = nullid, nullid
581 fp1, fp2 = nullid, nullid
582 else:
582 else:
583 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
583 entry, fp1, fp2 = self.checkfilemerge(f, t, r, m1, m2)
584 if entry:
584 if entry:
585 new[f] = entry
585 new[f] = entry
586 continue
586 continue
587
587
588 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
588 new[f] = r.add(t, meta, tr, linkrev, fp1, fp2)
589 # remember what we've added so that we can later calculate
589 # remember what we've added so that we can later calculate
590 # the files to pull from a set of changesets
590 # the files to pull from a set of changesets
591 changed.append(f)
591 changed.append(f)
592
592
593 # update manifest
593 # update manifest
594 m1 = m1.copy()
594 m1 = m1.copy()
595 m1.update(new)
595 m1.update(new)
596 for f in remove:
596 for f in remove:
597 if f in m1:
597 if f in m1:
598 del m1[f]
598 del m1[f]
599 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
599 mn = self.manifest.add(m1, mf1, tr, linkrev, c1[0], c2[0],
600 (new, remove))
600 (new, remove))
601
601
602 # add changeset
602 # add changeset
603 new = new.keys()
603 new = new.keys()
604 new.sort()
604 new.sort()
605
605
606 user = user or self.ui.username()
606 user = user or self.ui.username()
607 if not text or force_editor:
607 if not text or force_editor:
608 edittext = []
608 edittext = []
609 if text:
609 if text:
610 edittext.append(text)
610 edittext.append(text)
611 edittext.append("")
611 edittext.append("")
612 if p2 != nullid:
612 if p2 != nullid:
613 edittext.append("HG: branch merge")
613 edittext.append("HG: branch merge")
614 edittext.extend(["HG: changed %s" % f for f in changed])
614 edittext.extend(["HG: changed %s" % f for f in changed])
615 edittext.extend(["HG: removed %s" % f for f in remove])
615 edittext.extend(["HG: removed %s" % f for f in remove])
616 if not changed and not remove:
616 if not changed and not remove:
617 edittext.append("HG: no files changed")
617 edittext.append("HG: no files changed")
618 edittext.append("")
618 edittext.append("")
619 # run editor in the repository root
619 # run editor in the repository root
620 olddir = os.getcwd()
620 olddir = os.getcwd()
621 os.chdir(self.root)
621 os.chdir(self.root)
622 text = self.ui.edit("\n".join(edittext), user)
622 text = self.ui.edit("\n".join(edittext), user)
623 os.chdir(olddir)
623 os.chdir(olddir)
624
624
625 lines = [line.rstrip() for line in text.rstrip().splitlines()]
625 lines = [line.rstrip() for line in text.rstrip().splitlines()]
626 while lines and not lines[0]:
626 while lines and not lines[0]:
627 del lines[0]
627 del lines[0]
628 if not lines:
628 if not lines:
629 return None
629 return None
630 text = '\n'.join(lines)
630 text = '\n'.join(lines)
631 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
631 n = self.changelog.add(mn, changed + remove, text, tr, p1, p2, user, date)
632 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
632 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
633 parent2=xp2)
633 parent2=xp2)
634 tr.close()
634 tr.close()
635
635
636 self.dirstate.setparents(n)
636 self.dirstate.setparents(n)
637 self.dirstate.update(new, "n")
637 self.dirstate.update(new, "n")
638 self.dirstate.forget(remove)
638 self.dirstate.forget(remove)
639
639
640 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
640 self.hook("commit", node=hex(n), parent1=xp1, parent2=xp2)
641 return n
641 return n
642
642
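commit() fires three hooks: precommit before anything is written, pretxncommit inside the transaction with throw=True (so a failure rolls the whole commit back), and commit after the dirstate is updated. For shell hooks, runhook turns the keyword arguments into HG_* environment variables, as sketched here with placeholder node IDs:

    args = {"node": "a" * 40, "parent1": "b" * 40, "parent2": ""}
    env = dict([("HG_" + k.upper(), v) for k, v in args.items()])
    print(sorted(env))          # ['HG_NODE', 'HG_PARENT1', 'HG_PARENT2']
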
643 def walk(self, node=None, files=[], match=util.always, badmatch=None):
643 def walk(self, node=None, files=[], match=util.always, badmatch=None):
644 if node:
644 if node:
645 fdict = dict.fromkeys(files)
645 fdict = dict.fromkeys(files)
646 for fn in self.manifest.read(self.changelog.read(node)[0]):
646 for fn in self.manifest.read(self.changelog.read(node)[0]):
647 fdict.pop(fn, None)
647 fdict.pop(fn, None)
648 if match(fn):
648 if match(fn):
649 yield 'm', fn
649 yield 'm', fn
650 for fn in fdict:
650 for fn in fdict:
651 if badmatch and badmatch(fn):
651 if badmatch and badmatch(fn):
652 if match(fn):
652 if match(fn):
653 yield 'b', fn
653 yield 'b', fn
654 else:
654 else:
655 self.ui.warn(_('%s: No such file in rev %s\n') % (
655 self.ui.warn(_('%s: No such file in rev %s\n') % (
656 util.pathto(self.getcwd(), fn), short(node)))
656 util.pathto(self.getcwd(), fn), short(node)))
657 else:
657 else:
658 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
658 for src, fn in self.dirstate.walk(files, match, badmatch=badmatch):
659 yield src, fn
659 yield src, fn
660
660
661 def changes(self, node1=None, node2=None, files=[], match=util.always,
661 def changes(self, node1=None, node2=None, files=[], match=util.always,
662 wlock=None, show_ignored=None):
662 wlock=None, show_ignored=None):
663 """return changes between two nodes or node and working directory
663 """return changes between two nodes or node and working directory
664
664
665 If node1 is None, use the first dirstate parent instead.
665 If node1 is None, use the first dirstate parent instead.
666 If node2 is None, compare node1 with working directory.
666 If node2 is None, compare node1 with working directory.
667 """
667 """
668
668
669 def fcmp(fn, mf):
669 def fcmp(fn, mf):
670 t1 = self.wread(fn)
670 t1 = self.wread(fn)
671 t2 = self.file(fn).read(mf.get(fn, nullid))
671 t2 = self.file(fn).read(mf.get(fn, nullid))
672 return cmp(t1, t2)
672 return cmp(t1, t2)
673
673
674 def mfmatches(node):
674 def mfmatches(node):
675 change = self.changelog.read(node)
675 change = self.changelog.read(node)
676 mf = dict(self.manifest.read(change[0]))
676 mf = dict(self.manifest.read(change[0]))
677 for fn in mf.keys():
677 for fn in mf.keys():
678 if not match(fn):
678 if not match(fn):
679 del mf[fn]
679 del mf[fn]
680 return mf
680 return mf
681
681
682 modified, added, removed, deleted, unknown, ignored = [],[],[],[],[],[]
682 modified, added, removed, deleted, unknown, ignored = [],[],[],[],[],[]
683 compareworking = False
683 compareworking = False
684 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
684 if not node1 or (not node2 and node1 == self.dirstate.parents()[0]):
685 compareworking = True
685 compareworking = True
686
686
687 if not compareworking:
687 if not compareworking:
688 # read the manifest from node1 before the manifest from node2,
688 # read the manifest from node1 before the manifest from node2,
689 # so that we'll hit the manifest cache if we're going through
689 # so that we'll hit the manifest cache if we're going through
690 # all the revisions in parent->child order.
690 # all the revisions in parent->child order.
691 mf1 = mfmatches(node1)
691 mf1 = mfmatches(node1)
692
692
693 # are we comparing the working directory?
693 # are we comparing the working directory?
694 if not node2:
694 if not node2:
695 if not wlock:
695 if not wlock:
696 try:
696 try:
697 wlock = self.wlock(wait=0)
697 wlock = self.wlock(wait=0)
698 except lock.LockException:
698 except lock.LockException:
699 wlock = None
699 wlock = None
700 lookup, modified, added, removed, deleted, unknown, ignored = (
700 lookup, modified, added, removed, deleted, unknown, ignored = (
701 self.dirstate.changes(files, match, show_ignored))
701 self.dirstate.changes(files, match, show_ignored))
702
702
703 # are we comparing working dir against its parent?
703 # are we comparing working dir against its parent?
704 if compareworking:
704 if compareworking:
705 if lookup:
705 if lookup:
706 # do a full compare of any files that might have changed
706 # do a full compare of any files that might have changed
707 mf2 = mfmatches(self.dirstate.parents()[0])
707 mf2 = mfmatches(self.dirstate.parents()[0])
708 for f in lookup:
708 for f in lookup:
709 if fcmp(f, mf2):
709 if fcmp(f, mf2):
710 modified.append(f)
710 modified.append(f)
711 elif wlock is not None:
711 elif wlock is not None:
712 self.dirstate.update([f], "n")
712 self.dirstate.update([f], "n")
713 else:
713 else:
714 # we are comparing working dir against non-parent
714 # we are comparing working dir against non-parent
715 # generate a pseudo-manifest for the working dir
715 # generate a pseudo-manifest for the working dir
716 mf2 = mfmatches(self.dirstate.parents()[0])
716 mf2 = mfmatches(self.dirstate.parents()[0])
717 for f in lookup + modified + added:
717 for f in lookup + modified + added:
718 mf2[f] = ""
718 mf2[f] = ""
719 for f in removed:
719 for f in removed:
720 if f in mf2:
720 if f in mf2:
721 del mf2[f]
721 del mf2[f]
722 else:
722 else:
723 # we are comparing two revisions
723 # we are comparing two revisions
724 deleted, unknown, ignored = [], [], []
724 deleted, unknown, ignored = [], [], []
725 mf2 = mfmatches(node2)
725 mf2 = mfmatches(node2)
726
726
727 if not compareworking:
727 if not compareworking:
728 # flush lists from dirstate before comparing manifests
728 # flush lists from dirstate before comparing manifests
729 modified, added = [], []
729 modified, added = [], []
730
730
731 # make sure to sort the files so we talk to the disk in a
731 # make sure to sort the files so we talk to the disk in a
732 # reasonable order
732 # reasonable order
733 mf2keys = mf2.keys()
733 mf2keys = mf2.keys()
734 mf2keys.sort()
734 mf2keys.sort()
735 for fn in mf2keys:
735 for fn in mf2keys:
736 if mf1.has_key(fn):
736 if mf1.has_key(fn):
737 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
737 if mf1[fn] != mf2[fn] and (mf2[fn] != "" or fcmp(fn, mf1)):
738 modified.append(fn)
738 modified.append(fn)
739 del mf1[fn]
739 del mf1[fn]
740 else:
740 else:
741 added.append(fn)
741 added.append(fn)
742
742
743 removed = mf1.keys()
743 removed = mf1.keys()
744
744
745 # sort and return results:
745 # sort and return results:
746 for l in modified, added, removed, deleted, unknown, ignored:
746 for l in modified, added, removed, deleted, unknown, ignored:
747 l.sort()
747 l.sort()
748 if show_ignored is None:
748 if show_ignored is None:
749 return (modified, added, removed, deleted, unknown)
749 return (modified, added, removed, deleted, unknown)
750 else:
750 else:
751 return (modified, added, removed, deleted, unknown, ignored)
751 return (modified, added, removed, deleted, unknown, ignored)
752
752
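changes() returns five sorted lists by default and a sixth, ignored, only when show_ignored is set, so callers unpack it positionally. A stand-in showing the shape of the result (file names are invented):

    def fake_changes(show_ignored=None):
        modified, added, removed = ["a.txt"], ["b.txt"], []
        deleted, unknown, ignored = [], ["junk.tmp"], ["cruft.pyc"]
        if show_ignored is None:
            return (modified, added, removed, deleted, unknown)
        return (modified, added, removed, deleted, unknown, ignored)

    # typical caller:
    #   modified, added, removed, deleted, unknown = repo.changes()
    print(len(fake_changes()))                # 5
    print(len(fake_changes(show_ignored=1)))  # 6
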
753 def add(self, list, wlock=None):
753 def add(self, list, wlock=None):
754 if not wlock:
754 if not wlock:
755 wlock = self.wlock()
755 wlock = self.wlock()
756 for f in list:
756 for f in list:
757 p = self.wjoin(f)
757 p = self.wjoin(f)
758 if not os.path.exists(p):
            if not os.path.exists(p):
                self.ui.warn(_("%s does not exist!\n") % f)
            elif not os.path.isfile(p):
                self.ui.warn(_("%s not added: only files supported currently\n")
                             % f)
            elif self.dirstate.state(f) in 'an':
                self.ui.warn(_("%s already tracked!\n") % f)
            else:
                self.dirstate.update([f], "a")

    def forget(self, list, wlock=None):
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in 'ai':
                self.ui.warn(_("%s not added!\n") % f)
            else:
                self.dirstate.forget([f])

    def remove(self, list, unlink=False, wlock=None):
        if unlink:
            for f in list:
                try:
                    util.unlink(self.wjoin(f))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise
        if not wlock:
            wlock = self.wlock()
        for f in list:
            p = self.wjoin(f)
            if os.path.exists(p):
                self.ui.warn(_("%s still exists!\n") % f)
            elif self.dirstate.state(f) == 'a':
                self.dirstate.forget([f])
            elif f not in self.dirstate:
                self.ui.warn(_("%s not tracked!\n") % f)
            else:
                self.dirstate.update([f], "r")

    def undelete(self, list, wlock=None):
        p = self.dirstate.parents()[0]
        mn = self.changelog.read(p)[0]
        mf = self.manifest.readflags(mn)
        m = self.manifest.read(mn)
        if not wlock:
            wlock = self.wlock()
        for f in list:
            if self.dirstate.state(f) not in "r":
                self.ui.warn("%s not removed!\n" % f)
            else:
                t = self.file(f).read(m[f])
                self.wwrite(f, t)
                util.set_exec(self.wjoin(f), mf[f])
                self.dirstate.update([f], "n")

    def copy(self, source, dest, wlock=None):
        p = self.wjoin(dest)
        if not os.path.exists(p):
            self.ui.warn(_("%s does not exist!\n") % dest)
        elif not os.path.isfile(p):
            self.ui.warn(_("copy failed: %s is not a file\n") % dest)
        else:
            if not wlock:
                wlock = self.wlock()
            if self.dirstate.state(dest) == '?':
                self.dirstate.update([dest], "a")
            self.dirstate.copy(source, dest)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        heads = [(-self.changelog.rev(h), h) for h in heads]
        heads.sort()
        return [n for (r, n) in heads]

    # branchlookup returns a dict giving a list of branches for
    # each head.  A branch is defined as the tag of a node or
    # the branch of the node's parents.  If a node has multiple
    # branch tags, tags are eliminated if they are visible from other
    # branch tags.
    #
    # So, for this graph:  a->b->c->d->e
    #                       \         /
    #                         aa -----/
    # a has tag 2.6.12
    # d has tag 2.6.13
    # e would have branch tags for 2.6.12 and 2.6.13.  Because the node
    # for 2.6.12 can be reached from the node 2.6.13, that is eliminated
    # from the list.
    #
    # It is possible that more than one head will have the same branch tag.
    # callers need to check the result for multiple heads under the same
    # branch tag if that is a problem for them (ie checkout of a specific
    # branch).
    #
    # passing in a specific branch will limit the depth of the search
    # through the parents.  It won't limit the branches returned in the
    # result though.
    def branchlookup(self, heads=None, branch=None):
        if not heads:
            heads = self.heads()
        headt = [ h for h in heads ]
        chlog = self.changelog
        branches = {}
        merges = []
        seenmerge = {}

        # traverse the tree once for each head, recording in the branches
        # dict which tags are visible from this head.  The branches
        # dict also records which tags are visible from each tag
        # while we traverse.
        while headt or merges:
            if merges:
                n, found = merges.pop()
                visit = [n]
            else:
                h = headt.pop()
                visit = [h]
                found = [h]
                seen = {}
            while visit:
                n = visit.pop()
                if n in seen:
                    continue
                pp = chlog.parents(n)
                tags = self.nodetags(n)
                if tags:
                    for x in tags:
                        if x == 'tip':
                            continue
                        for f in found:
                            branches.setdefault(f, {})[n] = 1
                        branches.setdefault(n, {})[n] = 1
                        break
                    if n not in found:
                        found.append(n)
                    if branch in tags:
                        continue
                seen[n] = 1
                if pp[1] != nullid and n not in seenmerge:
                    merges.append((pp[1], [x for x in found]))
                    seenmerge[n] = 1
                if pp[0] != nullid:
                    visit.append(pp[0])
        # traverse the branches dict, eliminating branch tags from each
        # head that are visible from another branch tag for that head.
        out = {}
        viscache = {}
        for h in heads:
            def visible(node):
                if node in viscache:
                    return viscache[node]
                ret = {}
                visit = [node]
                while visit:
                    x = visit.pop()
                    if x in viscache:
                        ret.update(viscache[x])
                    elif x not in ret:
                        ret[x] = 1
                        if x in branches:
                            visit[len(visit):] = branches[x].keys()
                viscache[node] = ret
                return ret
            if h not in branches:
                continue
            # O(n^2), but somewhat limited.  This only searches the
            # tags visible from a specific head, not all the tags in the
            # whole repo.
            for b in branches[h]:
                vis = False
                for bb in branches[h].keys():
                    if b != bb:
                        if b in visible(bb):
                            vis = True
                            break
                if not vis:
                    l = out.setdefault(h, [])
                    l[len(l):] = self.nodetags(b)
        return out

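    # Descriptive note (added by the editor, inferred from the code below):
    # for each starting node, branches() follows first parents until it hits
    # a merge or the root of the graph, and reports that linear segment as a
    # (segment head, segment root, first parent, second parent) tuple.  This
    # is the 'branch' format findincoming() expects from a remote peer.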
    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while 1:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

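    # Descriptive note (added by the editor, inferred from the code below):
    # for each (top, bottom) pair, between() walks first parents from top
    # towards bottom and records the nodes seen at exponentially growing
    # distances (1, 2, 4, 8, ...).  findincoming() uses these samples to
    # binary-search for the first unknown changeset on a branch.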
    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

    def findincoming(self, remote, base=None, heads=None, force=False):
        """Return list of roots of the subsets of missing nodes from remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side and that no child of a node of base exists
        in both remote and self.
        Furthermore, base will be updated to include the nodes that exist
        in both self and remote but have no child that exists in both
        self and remote.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads.

        All the ancestors of base are in self and in remote.
        All the descendants of the list returned are missing in self.
        (and so we know that the rest of the nodes are missing in remote, see
        outgoing)
        """
        m = self.changelog.nodemap
        search = []
        fetch = {}
        seen = {}
        seenbranch = {}
        if base == None:
            base = {}

        if not heads:
            heads = remote.heads()

        if self.changelog.tip() == nullid:
            base[nullid] = 1
            if heads != [nullid]:
                return [nullid]
            return []

        # assume we're closer to the tip than the root
        # and start by examining the heads
        self.ui.status(_("searching for changes\n"))

        unknown = []
        for h in heads:
            if h not in m:
                unknown.append(h)
            else:
                base[h] = 1

        if not unknown:
            return []

        req = dict.fromkeys(unknown)
        reqcnt = 0

        # search through remote branches
        # a 'branch' here is a linear segment of history, with four parts:
        # head, root, first parent, second parent
        # (a branch always has two parents (or none) by definition)
        unknown = remote.branches(unknown)
        while unknown:
            r = []
            while unknown:
                n = unknown.pop(0)
                if n[0] in seen:
                    continue

                self.ui.debug(_("examining %s:%s\n")
                              % (short(n[0]), short(n[1])))
                if n[0] == nullid: # found the end of the branch
                    pass
                elif n in seenbranch:
                    self.ui.debug(_("branch already found\n"))
                    continue
                elif n[1] and n[1] in m: # do we know the base?
                    self.ui.debug(_("found incomplete branch %s:%s\n")
                                  % (short(n[0]), short(n[1])))
                    search.append(n) # schedule branch range for scanning
                    seenbranch[n] = 1
                else:
                    if n[1] not in seen and n[1] not in fetch:
                        if n[2] in m and n[3] in m:
                            self.ui.debug(_("found new changeset %s\n") %
                                          short(n[1]))
                            fetch[n[1]] = 1 # earliest unknown
                            for p in n[2:4]:
                                if p in m:
                                    base[p] = 1 # latest known

                    for p in n[2:4]:
                        if p not in req and p not in m:
                            r.append(p)
                            req[p] = 1
                seen[n[0]] = 1

            if r:
                reqcnt += 1
                self.ui.debug(_("request %d: %s\n") %
                              (reqcnt, " ".join(map(short, r))))
                for p in range(0, len(r), 10):
                    for b in remote.branches(r[p:p+10]):
                        self.ui.debug(_("received %s:%s\n") %
                                      (short(b[0]), short(b[1])))
                        unknown.append(b)

        # do binary search on the branches we found
        while search:
            n = search.pop(0)
            reqcnt += 1
            l = remote.between([(n[0], n[1])])[0]
            l.append(n[1])
            p = n[0]
            f = 1
            for i in l:
                self.ui.debug(_("narrowing %d:%d %s\n") % (f, len(l), short(i)))
                if i in m:
                    if f <= 2:
                        self.ui.debug(_("found new branch changeset %s\n") %
                                      short(p))
                        fetch[p] = 1
                        base[i] = 1
                    else:
                        self.ui.debug(_("narrowed branch search to %s:%s\n")
                                      % (short(p), short(i)))
                        search.append((p, i))
                    break
                p, f = i, f * 2

        # sanity check our fetch list
        for f in fetch.keys():
            if f in m:
                raise repo.RepoError(_("already have changeset ") + short(f[:4]))

        if base.keys() == [nullid]:
            if force:
                self.ui.warn(_("warning: repository is unrelated\n"))
            else:
                raise util.Abort(_("repository is unrelated"))

        self.ui.note(_("found new changesets starting at ") +
                     " ".join([short(f) for f in fetch]) + "\n")

        self.ui.debug(_("%d total queries\n") % reqcnt)

        return fetch.keys()

    def findoutgoing(self, remote, base=None, heads=None, force=False):
        """Return list of nodes that are roots of subsets not in remote

        If base dict is specified, assume that these nodes and their parents
        exist on the remote side.
        If a list of heads is specified, return only nodes which are heads
        or ancestors of these heads, and return a second element which
        contains all remote heads which get new children.
        """
        if base == None:
            base = {}
            self.findincoming(remote, base, heads, force=force)

        self.ui.debug(_("common changesets up to ")
                      + " ".join(map(short, base.keys())) + "\n")

        remain = dict.fromkeys(self.changelog.nodemap)

        # prune everything remote has from the tree
        del remain[nullid]
        remove = base.keys()
        while remove:
            n = remove.pop(0)
            if n in remain:
                del remain[n]
                for p in self.changelog.parents(n):
                    remove.append(p)

        # find every node whose parents have been pruned
        subset = []
        # find every remote head that will get new children
        updated_heads = {}
        for n in remain:
            p1, p2 = self.changelog.parents(n)
            if p1 not in remain and p2 not in remain:
                subset.append(n)
                if heads:
                    if p1 in heads:
                        updated_heads[p1] = True
                    if p2 in heads:
                        updated_heads[p2] = True

        # this is the set of all roots we have to push
        if heads:
            return subset, updated_heads.keys()
        else:
            return subset

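    # Descriptive note (added by the editor, inferred from the code below):
    # pull() discovers which roots are missing locally, asks the remote for
    # a changegroup rooted there (optionally limited to the given heads),
    # and applies it with addchangegroup().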
    def pull(self, remote, heads=None, force=False):
        l = self.lock()

        fetch = self.findincoming(remote, force=force)
        if fetch == [nullid]:
            self.ui.status(_("requesting all changes\n"))

        if not fetch:
            self.ui.status(_("no changes found\n"))
            return 0

        if heads is None:
            cg = remote.changegroup(fetch, 'pull')
        else:
            cg = remote.changegroupsubset(fetch, heads, 'pull')
        return self.addchangegroup(cg, 'pull')

    def push(self, remote, force=False, revs=None):
        # there are two ways to push to remote repo:
        #
        # addchangegroup assumes local user can lock remote
        # repo (local filesystem, old ssh servers).
        #
        # unbundle assumes local user cannot lock remote repo (new ssh
        # servers, http servers).

        if remote.capable('unbundle'):
            return self.push_unbundle(remote, force, revs)
        return self.push_addchangegroup(remote, force, revs)

    def prepush(self, remote, force, revs):
        base = {}
        remote_heads = remote.heads()
        inc = self.findincoming(remote, base, remote_heads, force=force)
        if not force and inc:
            self.ui.warn(_("abort: unsynced remote changes!\n"))
            self.ui.status(_("(did you forget to sync?"
                             " use push -f to force)\n"))
            return None, 1

        update, updated_heads = self.findoutgoing(remote, base, remote_heads)
        if revs is not None:
            msng_cl, bases, heads = self.changelog.nodesbetween(update, revs)
        else:
            bases, heads = update, self.changelog.heads()

        if not bases:
            self.ui.status(_("no changes found\n"))
            return None, 1
        elif not force:
            # FIXME we don't properly detect creation of new heads
            # in the push -r case, assume the user knows what he's doing
            if not revs and len(remote_heads) < len(heads) \
               and remote_heads != [nullid]:
                self.ui.warn(_("abort: push creates new remote branches!\n"))
                self.ui.status(_("(did you forget to merge?"
                                 " use push -f to force)\n"))
                return None, 1

        if revs is None:
            cg = self.changegroup(update, 'push')
        else:
            cg = self.changegroupsubset(update, revs, 'push')
        return cg, remote_heads

    def push_addchangegroup(self, remote, force, revs):
        lock = remote.lock()

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            return remote.addchangegroup(cg, 'push')
        return ret[1]

    def push_unbundle(self, remote, force, revs):
        # local repo finds heads on server, finds out what revs it
        # must push.  once revs transferred, if server finds it has
        # different heads (someone else won commit/push race), server
        # aborts.

        ret = self.prepush(remote, force, revs)
        if ret[0] is not None:
            cg, remote_heads = ret
            if force: remote_heads = ['force']
            return remote.unbundle(cg, remote_heads, 'push')
        return ret[1]

    def changegroupsubset(self, bases, heads, source):
        """This function generates a changegroup consisting of all the nodes
        that are descendants of any of the bases, and ancestors of any of
        the heads.

        It is fairly complex as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to."""

        self.hook('preoutgoing', throw=True, source=source)

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # msng is short for missing - compute the list of changesets in this
        # changegroup.
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.

        # Known heads are the list of heads that it is assumed the recipient
        # of this changegroup will know about.
        knownheads = {}
        # We assume that all parents of bases are known heads.
        for n in bases:
            for p in cl.parents(n):
                if p != nullid:
                    knownheads[p] = 1
        knownheads = knownheads.keys()
        if knownheads:
            # Now that we know what heads are known, we can compute which
            # changesets are known.  The recipient must know about all
            # changesets required to reach the known heads from the null
            # changeset.
            has_cl_set, junk, junk = cl.nodesbetween(None, knownheads)
            junk = None
            # Transform the list into an ersatz set.
            has_cl_set = dict.fromkeys(has_cl_set)
        else:
            # If there were no known heads, the recipient cannot be assumed to
            # know about any changesets.
            has_cl_set = {}

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        junk = mnfst.index[mnfst.count() - 1] # Get around a bug in lazyindex
        junk = None

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function.  Sets up an environment for the
        # inner function.
        def cmp_by_rev_func(revlog):
            # Compare two nodes by their revision number in the environment's
            # revision history.  Since the revision number both represents the
            # most efficient order to read the nodes in, and represents a
            # topological sorting of the nodes, this function is often useful.
            def cmp_by_rev(a, b):
                return cmp(revlog.rev(a), revlog.rev(b))
            return cmp_by_rev

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune_parents(revlog, hasset, msngset):
            haslst = hasset.keys()
            haslst.sort(cmp_by_rev_func(revlog))
            for node in haslst:
                parentlst = [p for p in revlog.parents(node) if p != nullid]
                while parentlst:
                    n = parentlst.pop()
                    if n not in hasset:
                        hasset[n] = 1
                        p = [p for p in revlog.parents(n) if p != nullid]
                        parentlst.extend(p)
            for n in hasset:
                msngset.pop(n, None)

        # This is a function generating function used to set up an environment
        # for the inner function to execute in.
        def manifest_and_file_collector(changedfileset):
            # This is an information gathering function that gathers
            # information from each changeset node that goes out as part of
            # the changegroup.  The information gathered is a list of which
            # manifest nodes are potentially required (the recipient may
            # already have them) and total list of all files which were
            # changed in any changeset in the changegroup.
            #
            # We also remember the first changenode we saw any manifest
            # referenced by so we can later determine which changenode 'owns'
            # the manifest.
            def collect_manifests_and_files(clnode):
                c = cl.read(clnode)
                for f in c[3]:
                    # This is to make sure we only have one instance of each
                    # filename string for each filename.
                    changedfileset.setdefault(f, f)
                msng_mnfst_set.setdefault(c[0], clnode)
            return collect_manifests_and_files

        # Figure out which manifest nodes (of the ones we think might be part
        # of the changegroup) the recipient must know about and remove them
        # from the changegroup.
        def prune_manifests():
            has_mnfst_set = {}
            for n in msng_mnfst_set:
                # If a 'missing' manifest thinks it belongs to a changenode
                # the recipient is assumed to have, obviously the recipient
                # must have that manifest.
                linknode = cl.node(mnfst.linkrev(n))
                if linknode in has_cl_set:
                    has_mnfst_set[n] = 1
            prune_parents(mnfst, has_mnfst_set, msng_mnfst_set)

        # Use the information collected in collect_manifests_and_files to say
        # which changenode any manifestnode belongs to.
        def lookup_manifest_link(mnfstnode):
            return msng_mnfst_set[mnfstnode]

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            next_rev = [0]
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if r == next_rev[0]:
                    # If the last rev we looked at was the one just previous,
                    # we only need to see a diff.
                    delta = mdiff.patchtext(mnfst.delta(mnfstnode))
                    # For each line in the delta
                    for dline in delta.splitlines():
                        # get the filename and filenode for that line
                        f, fnode = dline.split('\0')
                        fnode = bin(fnode[:40])
                        f = changedfiles.get(f, None)
                        # And if the file is in the list of files we care
                        # about.
                        if f is not None:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
                # Remember the revision we hope to see next.
                next_rev[0] = r + 1
            return collect_msng_filenodes

        # We have a list of filenodes we think we need for a file, let's remove
        # all those we know the recipient must have.
        def prune_filenodes(f, filerevlog):
            msngset = msng_filenode_set[f]
            hasset = {}
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in msngset:
                clnode = cl.node(filerevlog.linkrev(n))
                if clnode in has_cl_set:
                    hasset[n] = 1
            prune_parents(filerevlog, hasset, msngset)

        # A function generating function that sets up a context for the
        # inner function.
        def lookup_filenode_link_func(fname):
            msngset = msng_filenode_set[fname]
            # Lookup the changenode the filenode belongs to.
            def lookup_filenode_link(fnode):
                return msngset[fnode]
            return lookup_filenode_link

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = {}
            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity,
                             manifest_and_file_collector(changedfiles))
            for chnk in group:
                yield chnk

            # The list of manifests has been collected by the generator
            # calling our functions back.
            prune_manifests()
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(cmp_by_rev_func(mnfst))
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst, lookup_manifest_link,
                                filenode_collector(changedfiles))
            for chnk in group:
                yield chnk

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            changedfiles = changedfiles.keys()
            changedfiles.sort()
            # Go through all our files in order sorted by name.
            for fname in changedfiles:
                filerevlog = self.file(fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                if msng_filenode_set.has_key(fname):
                    prune_filenodes(fname, filerevlog)
                    msng_filenode_lst = msng_filenode_set[fname].keys()
                else:
                    msng_filenode_lst = []
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if len(msng_filenode_lst) > 0:
                    yield changegroup.genchunk(fname)
                    # Sort the filenodes by their revision #
                    msng_filenode_lst.sort(cmp_by_rev_func(filerevlog))
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(msng_filenode_lst,
                                             lookup_filenode_link_func(fname))
                    for chnk in group:
                        yield chnk
                if msng_filenode_set.has_key(fname):
                    # Don't need this anymore, toss it to free memory.
                    del msng_filenode_set[fname]
            # Signal that no more groups are left.
            yield changegroup.closechunk()

        if msng_cl_lst:
            self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return util.chunkbuffer(gengroup())

    def changegroup(self, basenodes, source):
        """Generate a changegroup of all nodes that we have that a recipient
        doesn't.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them."""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        nodes = cl.nodesbetween(basenodes, None)[0]
        revset = dict.fromkeys([cl.rev(n) for n in nodes])

        def identity(x):
            return x

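        # Descriptive note (added by the editor, inferred from the code
        # below): gennodelst yields the nodes of a revlog whose linked
        # changeset is part of the outgoing revset, in storage order.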
        def gennodelst(revlog):
            for r in xrange(0, revlog.count()):
                n = revlog.node(r)
                if revlog.linkrev(n) in revset:
                    yield n

        def changed_file_collector(changedfileset):
            def collect_changed_files(clnode):
                c = cl.read(clnode)
                for fname in c[3]:
                    changedfileset[fname] = 1
            return collect_changed_files

        def lookuprevlink_func(revlog):
            def lookuprevlink(n):
                return cl.node(revlog.linkrev(n))
            return lookuprevlink

        def gengroup():
            # construct a list of all changed files
            changedfiles = {}

            for chnk in cl.group(nodes, identity,
                                 changed_file_collector(changedfiles)):
                yield chnk
            changedfiles = changedfiles.keys()
            changedfiles.sort()

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            for chnk in mnfst.group(nodeiter, lookuprevlink_func(mnfst)):
                yield chnk

            for fname in changedfiles:
                filerevlog = self.file(fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.genchunk(fname)
                    lookup = lookuprevlink_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        yield chnk

            yield changegroup.closechunk()

        if nodes:
            self.hook('outgoing', node=hex(nodes[0]), source=source)

        return util.chunkbuffer(gengroup())

    def addchangegroup(self, source, srctype):
        """add changegroup to repo.
        returns number of heads modified or added + 1."""

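        # Descriptive note (added by the editor, inferred from the code
        # below): csmap and revmap are callbacks for revlog.addgroup().
        # csmap reports the revision number the next incoming changeset will
        # receive; revmap maps a changelog node to its local revision number,
        # used as the linkrev for manifest and file revisions.  Both close
        # over cl, which is bound inside the try block below.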
        def csmap(x):
            self.ui.debug(_("add changeset %s\n") % short(x))
            return cl.count()

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype)

        changesets = files = revisions = 0

        tr = self.transaction()

        # write changelog data to temp files so concurrent readers will not
        # see an inconsistent view
        cl = None
        try:
            cl = appendfile.appendchangelog(self.opener, self.changelog.version)

            oldheads = len(cl.heads())

            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            cor = cl.count() - 1
            chunkiter = changegroup.chunkiter(source)
            if cl.addgroup(chunkiter, csmap, tr, 1) is None:
                raise util.Abort(_("received changelog group is empty"))
            cnr = cl.count() - 1
            changesets = cnr - cor

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            chunkiter = changegroup.chunkiter(source)
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(chunkiter, revmap, tr)

            # process the files
            self.ui.status(_("adding file changes\n"))
            while 1:
                f = changegroup.getchunk(source)
                if not f:
                    break
                self.ui.debug(_("adding %s revisions\n") % f)
                fl = self.file(f)
                o = fl.count()
                chunkiter = changegroup.chunkiter(source)
                if fl.addgroup(chunkiter, revmap, tr) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += fl.count() - o
                files += 1

            cl.writedata()
        finally:
            if cl:
                cl.cleanup()

        # make changelog see real files again
        self.changelog = changelog.changelog(self.opener, self.changelog.version)
        self.changelog.checkinlinesize(tr)

        newheads = len(self.changelog.heads())
        heads = ""
        if oldheads and newheads != oldheads:
            heads = _(" (%+d heads)") % (newheads - oldheads)

        self.ui.status(_("added %d changesets"
                         " with %d changes to %d files%s\n")
                       % (changesets, revisions, files, heads))

        if changesets > 0:
            self.hook('pretxnchangegroup', throw=True,
                      node=hex(self.changelog.node(cor+1)), source=srctype)

        tr.close()

        if changesets > 0:
            self.hook("changegroup", node=hex(self.changelog.node(cor+1)),
                      source=srctype)

            for i in range(cor + 1, cnr + 1):
                self.hook("incoming", node=hex(self.changelog.node(i)),
                          source=srctype)

        return newheads - oldheads + 1

    def update(self, node, allow=False, force=False, choose=None,
               moddirstate=True, forcemerge=False, wlock=None, show_stats=True):
        pl = self.dirstate.parents()
        if not force and pl[1] != nullid:
            raise util.Abort(_("outstanding uncommitted merges"))

        err = False

1689 p1, p2 = pl[0], node
1689 p1, p2 = pl[0], node
1690 pa = self.changelog.ancestor(p1, p2)
1690 pa = self.changelog.ancestor(p1, p2)
1691 m1n = self.changelog.read(p1)[0]
1691 m1n = self.changelog.read(p1)[0]
1692 m2n = self.changelog.read(p2)[0]
1692 m2n = self.changelog.read(p2)[0]
1693 man = self.manifest.ancestor(m1n, m2n)
1693 man = self.manifest.ancestor(m1n, m2n)
1694 m1 = self.manifest.read(m1n)
1694 m1 = self.manifest.read(m1n)
1695 mf1 = self.manifest.readflags(m1n)
1695 mf1 = self.manifest.readflags(m1n)
1696 m2 = self.manifest.read(m2n).copy()
1696 m2 = self.manifest.read(m2n).copy()
1697 mf2 = self.manifest.readflags(m2n)
1697 mf2 = self.manifest.readflags(m2n)
1698 ma = self.manifest.read(man)
1698 ma = self.manifest.read(man)
1699 mfa = self.manifest.readflags(man)
1699 mfa = self.manifest.readflags(man)
1700
1700
1701 modified, added, removed, deleted, unknown = self.changes()
1701 modified, added, removed, deleted, unknown = self.changes()
1702
1702
1703 # is this a jump, or a merge? i.e. is there a linear path
1703 # is this a jump, or a merge? i.e. is there a linear path
1704 # from p1 to p2?
1704 # from p1 to p2?
1705 linear_path = (pa == p1 or pa == p2)
1705 linear_path = (pa == p1 or pa == p2)
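# example: updating from a revision to one of its descendants gives
# ancestor(p1, p2) == p1, so the path is linear; in a true branch merge the
# ancestor is a third, older revision and linear_path is False.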
1706
1706
1707 if allow and linear_path:
1707 if allow and linear_path:
1708 raise util.Abort(_("there is nothing to merge, just use "
1708 raise util.Abort(_("there is nothing to merge, just use "
1709 "'hg update' or look at 'hg heads'"))
1709 "'hg update' or look at 'hg heads'"))
1710 if allow and not forcemerge:
1710 if allow and not forcemerge:
1711 if modified or added or removed:
1711 if modified or added or removed:
1712 raise util.Abort(_("outstanding uncommitted changes"))
1712 raise util.Abort(_("outstanding uncommitted changes"))
1713
1713
1714 if not forcemerge and not force:
1714 if not forcemerge and not force:
1715 for f in unknown:
1715 for f in unknown:
1716 if f in m2:
1716 if f in m2:
1717 t1 = self.wread(f)
1717 t1 = self.wread(f)
1718 t2 = self.file(f).read(m2[f])
1718 t2 = self.file(f).read(m2[f])
1719 if cmp(t1, t2) != 0:
1719 if cmp(t1, t2) != 0:
1720 raise util.Abort(_("'%s' already exists in the working"
1720 raise util.Abort(_("'%s' already exists in the working"
1721 " dir and differs from remote") % f)
1721 " dir and differs from remote") % f)
1722
1722
1723 # resolve the manifest to determine which files
1723 # resolve the manifest to determine which files
1724 # we care about merging
1724 # we care about merging
1725 self.ui.note(_("resolving manifests\n"))
1725 self.ui.note(_("resolving manifests\n"))
1726 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1726 self.ui.debug(_(" force %s allow %s moddirstate %s linear %s\n") %
1727 (force, allow, moddirstate, linear_path))
1727 (force, allow, moddirstate, linear_path))
1728 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1728 self.ui.debug(_(" ancestor %s local %s remote %s\n") %
1729 (short(man), short(m1n), short(m2n)))
1729 (short(man), short(m1n), short(m2n)))
1730
1730
1731 merge = {}
1731 merge = {}
1732 get = {}
1732 get = {}
1733 remove = []
1733 remove = []
1734
1734
1735 # construct a working dir manifest
1735 # construct a working dir manifest
1736 mw = m1.copy()
1736 mw = m1.copy()
1737 mfw = mf1.copy()
1737 mfw = mf1.copy()
1738 umap = dict.fromkeys(unknown)
1738 umap = dict.fromkeys(unknown)
1739
1739
1740 for f in added + modified + unknown:
1740 for f in added + modified + unknown:
1741 mw[f] = ""
1741 mw[f] = ""
1742 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1742 mfw[f] = util.is_exec(self.wjoin(f), mfw.get(f, False))
1743
1743
1744 if moddirstate and not wlock:
1744 if moddirstate and not wlock:
1745 wlock = self.wlock()
1745 wlock = self.wlock()
1746
1746
1747 for f in deleted + removed:
1747 for f in deleted + removed:
1748 if f in mw:
1748 if f in mw:
1749 del mw[f]
1749 del mw[f]
1750
1750
1751 # If we're jumping between revisions (as opposed to merging),
1751 # If we're jumping between revisions (as opposed to merging),
1752 # and if neither the working directory nor the target rev has
1752 # and if neither the working directory nor the target rev has
1753 # the file, then we need to remove it from the dirstate, to
1753 # the file, then we need to remove it from the dirstate, to
1754 # prevent the dirstate from listing the file when it is no
1754 # prevent the dirstate from listing the file when it is no
1755 # longer in the manifest.
1755 # longer in the manifest.
1756 if moddirstate and linear_path and f not in m2:
1756 if moddirstate and linear_path and f not in m2:
1757 self.dirstate.forget((f,))
1757 self.dirstate.forget((f,))
1758
1758
1759 # Compare manifests
1759 # Compare manifests
1760 for f, n in mw.iteritems():
1760 for f, n in mw.iteritems():
1761 if choose and not choose(f):
1761 if choose and not choose(f):
1762 continue
1762 continue
1763 if f in m2:
1763 if f in m2:
1764 s = 0
1764 s = 0
1765
1765
1766 # is the wfile new since m1, and match m2?
1766 # is the wfile new since m1, and match m2?
1767 if f not in m1:
1767 if f not in m1:
1768 t1 = self.wread(f)
1768 t1 = self.wread(f)
1769 t2 = self.file(f).read(m2[f])
1769 t2 = self.file(f).read(m2[f])
1770 if cmp(t1, t2) == 0:
1770 if cmp(t1, t2) == 0:
1771 n = m2[f]
1771 n = m2[f]
1772 del t1, t2
1772 del t1, t2
1773
1773
1774 # are files different?
1774 # are files different?
1775 if n != m2[f]:
1775 if n != m2[f]:
1776 a = ma.get(f, nullid)
1776 a = ma.get(f, nullid)
1777 # are both different from the ancestor?
1777 # are both different from the ancestor?
1778 if n != a and m2[f] != a:
1778 if n != a and m2[f] != a:
1779 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1779 self.ui.debug(_(" %s versions differ, resolve\n") % f)
1780 # merge executable bits
1780 # merge executable bits
1781 # "if we changed or they changed, change in merge"
1781 # "if we changed or they changed, change in merge"
1782 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1782 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1783 mode = ((a^b) | (a^c)) ^ a
1783 mode = ((a^b) | (a^c)) ^ a
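# worked example (hypothetical bits): with ancestor a=0, exec bit set
# locally (b=1) and untouched remotely (c=0), ((0^1) | (0^0)) ^ 0 == 1,
# so the merged mode keeps the local change; if both sides still match
# the ancestor, the expression reduces to a and nothing changes.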
1784 merge[f] = (m1.get(f, nullid), m2[f], mode)
1784 merge[f] = (m1.get(f, nullid), m2[f], mode)
1785 s = 1
1785 s = 1
1786 # are we clobbering?
1786 # are we clobbering?
1787 # is remote's version newer?
1787 # is remote's version newer?
1788 # or are we going back in time?
1788 # or are we going back in time?
1789 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1789 elif force or m2[f] != a or (p2 == pa and mw[f] == m1[f]):
1790 self.ui.debug(_(" remote %s is newer, get\n") % f)
1790 self.ui.debug(_(" remote %s is newer, get\n") % f)
1791 get[f] = m2[f]
1791 get[f] = m2[f]
1792 s = 1
1792 s = 1
1793 elif f in umap or f in added:
1793 elif f in umap or f in added:
1794 # this unknown file is the same as the checkout
1794 # this unknown file is the same as the checkout
1795 # we need to reset the dirstate if the file was added
1795 # we need to reset the dirstate if the file was added
1796 get[f] = m2[f]
1796 get[f] = m2[f]
1797
1797
1798 if not s and mfw[f] != mf2[f]:
1798 if not s and mfw[f] != mf2[f]:
1799 if force:
1799 if force:
1800 self.ui.debug(_(" updating permissions for %s\n") % f)
1800 self.ui.debug(_(" updating permissions for %s\n") % f)
1801 util.set_exec(self.wjoin(f), mf2[f])
1801 util.set_exec(self.wjoin(f), mf2[f])
1802 else:
1802 else:
1803 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1803 a, b, c = mfa.get(f, 0), mfw[f], mf2[f]
1804 mode = ((a^b) | (a^c)) ^ a
1804 mode = ((a^b) | (a^c)) ^ a
1805 if mode != b:
1805 if mode != b:
1806 self.ui.debug(_(" updating permissions for %s\n")
1806 self.ui.debug(_(" updating permissions for %s\n")
1807 % f)
1807 % f)
1808 util.set_exec(self.wjoin(f), mode)
1808 util.set_exec(self.wjoin(f), mode)
1809 del m2[f]
1809 del m2[f]
1810 elif f in ma:
1810 elif f in ma:
1811 if n != ma[f]:
1811 if n != ma[f]:
1812 r = _("d")
1812 r = _("d")
1813 if not force and (linear_path or allow):
1813 if not force and (linear_path or allow):
1814 r = self.ui.prompt(
1814 r = self.ui.prompt(
1815 (_(" local changed %s which remote deleted\n") % f) +
1815 (_(" local changed %s which remote deleted\n") % f) +
1816 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1816 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1817 if r == _("d"):
1817 if r == _("d"):
1818 remove.append(f)
1818 remove.append(f)
1819 else:
1819 else:
1820 self.ui.debug(_("other deleted %s\n") % f)
1820 self.ui.debug(_("other deleted %s\n") % f)
1821 remove.append(f) # other deleted it
1821 remove.append(f) # other deleted it
1822 else:
1822 else:
1823 # file is created on branch or in working directory
1823 # file is created on branch or in working directory
1824 if force and f not in umap:
1824 if force and f not in umap:
1825 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1825 self.ui.debug(_("remote deleted %s, clobbering\n") % f)
1826 remove.append(f)
1826 remove.append(f)
1827 elif n == m1.get(f, nullid): # same as parent
1827 elif n == m1.get(f, nullid): # same as parent
1828 if p2 == pa: # going backwards?
1828 if p2 == pa: # going backwards?
1829 self.ui.debug(_("remote deleted %s\n") % f)
1829 self.ui.debug(_("remote deleted %s\n") % f)
1830 remove.append(f)
1830 remove.append(f)
1831 else:
1831 else:
1832 self.ui.debug(_("local modified %s, keeping\n") % f)
1832 self.ui.debug(_("local modified %s, keeping\n") % f)
1833 else:
1833 else:
1834 self.ui.debug(_("working dir created %s, keeping\n") % f)
1834 self.ui.debug(_("working dir created %s, keeping\n") % f)
1835
1835
1836 for f, n in m2.iteritems():
1836 for f, n in m2.iteritems():
1837 if choose and not choose(f):
1837 if choose and not choose(f):
1838 continue
1838 continue
1839 if f[0] == "/":
1839 if f[0] == "/":
1840 continue
1840 continue
1841 if f in ma and n != ma[f]:
1841 if f in ma and n != ma[f]:
1842 r = _("k")
1842 r = _("k")
1843 if not force and (linear_path or allow):
1843 if not force and (linear_path or allow):
1844 r = self.ui.prompt(
1844 r = self.ui.prompt(
1845 (_("remote changed %s which local deleted\n") % f) +
1845 (_("remote changed %s which local deleted\n") % f) +
1846 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1846 _("(k)eep or (d)elete?"), _("[kd]"), _("k"))
1847 if r == _("k"):
1847 if r == _("k"):
1848 get[f] = n
1848 get[f] = n
1849 elif f not in ma:
1849 elif f not in ma:
1850 self.ui.debug(_("remote created %s\n") % f)
1850 self.ui.debug(_("remote created %s\n") % f)
1851 get[f] = n
1851 get[f] = n
1852 else:
1852 else:
1853 if force or p2 == pa: # going backwards?
1853 if force or p2 == pa: # going backwards?
1854 self.ui.debug(_("local deleted %s, recreating\n") % f)
1854 self.ui.debug(_("local deleted %s, recreating\n") % f)
1855 get[f] = n
1855 get[f] = n
1856 else:
1856 else:
1857 self.ui.debug(_("local deleted %s\n") % f)
1857 self.ui.debug(_("local deleted %s\n") % f)
1858
1858
1859 del mw, m1, m2, ma
1859 del mw, m1, m2, ma
1860
1860
1861 if force:
1861 if force:
1862 for f in merge:
1862 for f in merge:
1863 get[f] = merge[f][1]
1863 get[f] = merge[f][1]
1864 merge = {}
1864 merge = {}
1865
1865
1866 if linear_path or force:
1866 if linear_path or force:
1867 # we don't need to do any magic, just jump to the new rev
1867 # we don't need to do any magic, just jump to the new rev
1868 branch_merge = False
1868 branch_merge = False
1869 p1, p2 = p2, nullid
1869 p1, p2 = p2, nullid
1870 else:
1870 else:
1871 if not allow:
1871 if not allow:
1872 self.ui.status(_("this update spans a branch"
1872 self.ui.status(_("this update spans a branch"
1873 " affecting the following files:\n"))
1873 " affecting the following files:\n"))
1874 fl = merge.keys() + get.keys()
1874 fl = merge.keys() + get.keys()
1875 fl.sort()
1875 fl.sort()
1876 for f in fl:
1876 for f in fl:
1877 cf = ""
1877 cf = ""
1878 if f in merge:
1878 if f in merge:
1879 cf = _(" (resolve)")
1879 cf = _(" (resolve)")
1880 self.ui.status(" %s%s\n" % (f, cf))
1880 self.ui.status(" %s%s\n" % (f, cf))
1881 self.ui.warn(_("aborting update spanning branches!\n"))
1881 self.ui.warn(_("aborting update spanning branches!\n"))
1882 self.ui.status(_("(use 'hg merge' to merge across branches"
1882 self.ui.status(_("(use 'hg merge' to merge across branches"
1883 " or 'hg update -C' to lose changes)\n"))
1883 " or 'hg update -C' to lose changes)\n"))
1884 return 1
1884 return 1
1885 branch_merge = True
1885 branch_merge = True
1886
1886
1887 xp1 = hex(p1)
1887 xp1 = hex(p1)
1888 xp2 = hex(p2)
1888 xp2 = hex(p2)
1889 if p2 == nullid: xxp2 = ''
1889 if p2 == nullid: xxp2 = ''
1890 else: xxp2 = xp2
1890 else: xxp2 = xp2
1891
1891
1892 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1892 self.hook('preupdate', throw=True, parent1=xp1, parent2=xxp2)
1893
1893
1894 # get the files we don't need to change
1894 # get the files we don't need to change
1895 files = get.keys()
1895 files = get.keys()
1896 files.sort()
1896 files.sort()
1897 for f in files:
1897 for f in files:
1898 if f[0] == "/":
1898 if f[0] == "/":
1899 continue
1899 continue
1900 self.ui.note(_("getting %s\n") % f)
1900 self.ui.note(_("getting %s\n") % f)
1901 t = self.file(f).read(get[f])
1901 t = self.file(f).read(get[f])
1902 self.wwrite(f, t)
1902 self.wwrite(f, t)
1903 util.set_exec(self.wjoin(f), mf2[f])
1903 util.set_exec(self.wjoin(f), mf2[f])
1904 if moddirstate:
1904 if moddirstate:
1905 if branch_merge:
1905 if branch_merge:
1906 self.dirstate.update([f], 'n', st_mtime=-1)
1906 self.dirstate.update([f], 'n', st_mtime=-1)
1907 else:
1907 else:
1908 self.dirstate.update([f], 'n')
1908 self.dirstate.update([f], 'n')
1909
1909
1910 # merge the tricky bits
1910 # merge the tricky bits
1911 failedmerge = []
1911 failedmerge = []
1912 files = merge.keys()
1912 files = merge.keys()
1913 files.sort()
1913 files.sort()
1914 for f in files:
1914 for f in files:
1915 self.ui.status(_("merging %s\n") % f)
1915 self.ui.status(_("merging %s\n") % f)
1916 my, other, flag = merge[f]
1916 my, other, flag = merge[f]
1917 ret = self.merge3(f, my, other, xp1, xp2)
1917 ret = self.merge3(f, my, other, xp1, xp2)
1918 if ret:
1918 if ret:
1919 err = True
1919 err = True
1920 failedmerge.append(f)
1920 failedmerge.append(f)
1921 util.set_exec(self.wjoin(f), flag)
1921 util.set_exec(self.wjoin(f), flag)
1922 if moddirstate:
1922 if moddirstate:
1923 if branch_merge:
1923 if branch_merge:
1924 # We've done a branch merge, mark this file as merged
1924 # We've done a branch merge, mark this file as merged
1925 # so that we properly record the merger later
1925 # so that we properly record the merger later
1926 self.dirstate.update([f], 'm')
1926 self.dirstate.update([f], 'm')
1927 else:
1927 else:
1928 # We've update-merged a locally modified file, so
1928 # We've update-merged a locally modified file, so
1929 # we set the dirstate to emulate a normal checkout
1929 # we set the dirstate to emulate a normal checkout
1930 # of that file some time in the past. Thus our
1930 # of that file some time in the past. Thus our
1931 # merge will appear as a normal local file
1931 # merge will appear as a normal local file
1932 # modification.
1932 # modification.
1933 f_len = len(self.file(f).read(other))
1933 f_len = len(self.file(f).read(other))
1934 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1934 self.dirstate.update([f], 'n', st_size=f_len, st_mtime=-1)
1935
1935
1936 remove.sort()
1936 remove.sort()
1937 for f in remove:
1937 for f in remove:
1938 self.ui.note(_("removing %s\n") % f)
1938 self.ui.note(_("removing %s\n") % f)
1939 util.audit_path(f)
1939 util.audit_path(f)
1940 try:
1940 try:
1941 util.unlink(self.wjoin(f))
1941 util.unlink(self.wjoin(f))
1942 except OSError, inst:
1942 except OSError, inst:
1943 if inst.errno != errno.ENOENT:
1943 if inst.errno != errno.ENOENT:
1944 self.ui.warn(_("update failed to remove %s: %s!\n") %
1944 self.ui.warn(_("update failed to remove %s: %s!\n") %
1945 (f, inst.strerror))
1945 (f, inst.strerror))
1946 if moddirstate:
1946 if moddirstate:
1947 if branch_merge:
1947 if branch_merge:
1948 self.dirstate.update(remove, 'r')
1948 self.dirstate.update(remove, 'r')
1949 else:
1949 else:
1950 self.dirstate.forget(remove)
1950 self.dirstate.forget(remove)
1951
1951
1952 if moddirstate:
1952 if moddirstate:
1953 self.dirstate.setparents(p1, p2)
1953 self.dirstate.setparents(p1, p2)
1954
1954
1955 if show_stats:
1955 if show_stats:
1956 stats = ((len(get), _("updated")),
1956 stats = ((len(get), _("updated")),
1957 (len(merge) - len(failedmerge), _("merged")),
1957 (len(merge) - len(failedmerge), _("merged")),
1958 (len(remove), _("removed")),
1958 (len(remove), _("removed")),
1959 (len(failedmerge), _("unresolved")))
1959 (len(failedmerge), _("unresolved")))
1960 note = ", ".join([_("%d files %s") % s for s in stats])
1960 note = ", ".join([_("%d files %s") % s for s in stats])
1961 self.ui.status("%s\n" % note)
1961 self.ui.status("%s\n" % note)
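# (illustrative only) e.g. "1 files updated, 0 files merged, 0 files removed,
# 0 files unresolved"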
1962 if moddirstate:
1962 if moddirstate:
1963 if branch_merge:
1963 if branch_merge:
1964 if failedmerge:
1964 if failedmerge:
1965 self.ui.status(_("There are unresolved merges,"
1965 self.ui.status(_("There are unresolved merges,"
1966 " you can redo the full merge using:\n"
1966 " you can redo the full merge using:\n"
1967 " hg update -C %s\n"
1967 " hg update -C %s\n"
1968 " hg merge %s\n"
1968 " hg merge %s\n"
1969 % (self.changelog.rev(p1),
1969 % (self.changelog.rev(p1),
1970 self.changelog.rev(p2))))
1970 self.changelog.rev(p2))))
1971 else:
1971 else:
1972 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1972 self.ui.status(_("(branch merge, don't forget to commit)\n"))
1973 elif failedmerge:
1973 elif failedmerge:
1974 self.ui.status(_("There are unresolved merges with"
1974 self.ui.status(_("There are unresolved merges with"
1975 " locally modified files.\n"))
1975 " locally modified files.\n"))
1976
1976
1977 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1977 self.hook('update', parent1=xp1, parent2=xxp2, error=int(err))
1978 return err
1978 return err
1979
1979
1980 def merge3(self, fn, my, other, p1, p2):
1980 def merge3(self, fn, my, other, p1, p2):
1981 """perform a 3-way merge in the working directory"""
1981 """perform a 3-way merge in the working directory"""
1982
1982
1983 def temp(prefix, node):
1983 def temp(prefix, node):
1984 pre = "%s~%s." % (os.path.basename(fn), prefix)
1984 pre = "%s~%s." % (os.path.basename(fn), prefix)
1985 (fd, name) = tempfile.mkstemp(prefix=pre)
1985 (fd, name) = tempfile.mkstemp(prefix=pre)
1986 f = os.fdopen(fd, "wb")
1986 f = os.fdopen(fd, "wb")
1987 self.wwrite(fn, fl.read(node), f)
1987 self.wwrite(fn, fl.read(node), f)
1988 f.close()
1988 f.close()
1989 return name
1989 return name
1990
1990
1991 fl = self.file(fn)
1991 fl = self.file(fn)
1992 base = fl.ancestor(my, other)
1992 base = fl.ancestor(my, other)
1993 a = self.wjoin(fn)
1993 a = self.wjoin(fn)
1994 b = temp("base", base)
1994 b = temp("base", base)
1995 c = temp("other", other)
1995 c = temp("other", other)
1996
1996
1997 self.ui.note(_("resolving %s\n") % fn)
1997 self.ui.note(_("resolving %s\n") % fn)
1998 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1998 self.ui.debug(_("file %s: my %s other %s ancestor %s\n") %
1999 (fn, short(my), short(other), short(base)))
1999 (fn, short(my), short(other), short(base)))
2000
2000
2001 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2001 cmd = (os.environ.get("HGMERGE") or self.ui.config("ui", "merge")
2002 or "hgmerge")
2002 or "hgmerge")
2003 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2003 r = util.system('%s "%s" "%s" "%s"' % (cmd, a, b, c), cwd=self.root,
2004 environ={'HG_FILE': fn,
2004 environ={'HG_FILE': fn,
2005 'HG_MY_NODE': p1,
2005 'HG_MY_NODE': p1,
2006 'HG_OTHER_NODE': p2,
2006 'HG_OTHER_NODE': p2,
2007 'HG_FILE_MY_NODE': hex(my),
2007 'HG_FILE_MY_NODE': hex(my),
2008 'HG_FILE_OTHER_NODE': hex(other),
2008 'HG_FILE_OTHER_NODE': hex(other),
2009 'HG_FILE_BASE_NODE': hex(base)})
2009 'HG_FILE_BASE_NODE': hex(base)})
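# (illustrative only) with the default helper this runs something like
#   hgmerge "<repo>/foo.c" "/tmp/foo.c~base.XXXXXX" "/tmp/foo.c~other.XXXXXX"
# (hypothetical paths) with the HG_* variables above in its environment;
# a non-zero exit status is reported as a failed merge below.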
2010 if r:
2010 if r:
2011 self.ui.warn(_("merging %s failed!\n") % fn)
2011 self.ui.warn(_("merging %s failed!\n") % fn)
2012
2012
2013 os.unlink(b)
2013 os.unlink(b)
2014 os.unlink(c)
2014 os.unlink(c)
2015 return r
2015 return r
2016
2016
2017 def verify(self):
2017 def verify(self):
2018 filelinkrevs = {}
2018 filelinkrevs = {}
2019 filenodes = {}
2019 filenodes = {}
2020 changesets = revisions = files = 0
2020 changesets = revisions = files = 0
2021 errors = [0]
2021 errors = [0]
2022 warnings = [0]
2022 warnings = [0]
2023 neededmanifests = {}
2023 neededmanifests = {}
2024
2024
2025 def err(msg):
2025 def err(msg):
2026 self.ui.warn(msg + "\n")
2026 self.ui.warn(msg + "\n")
2027 errors[0] += 1
2027 errors[0] += 1
2028
2028
2029 def warn(msg):
2029 def warn(msg):
2030 self.ui.warn(msg + "\n")
2030 self.ui.warn(msg + "\n")
2031 warnings[0] += 1
2031 warnings[0] += 1
2032
2032
2033 def checksize(obj, name):
2033 def checksize(obj, name):
2034 d = obj.checksize()
2034 d = obj.checksize()
2035 if d[0]:
2035 if d[0]:
2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2036 err(_("%s data length off by %d bytes") % (name, d[0]))
2037 if d[1]:
2037 if d[1]:
2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2038 err(_("%s index contains %d extra bytes") % (name, d[1]))
2039
2039
2040 def checkversion(obj, name):
2040 def checkversion(obj, name):
2041 if obj.version != revlog.REVLOGV0:
2041 if obj.version != revlog.REVLOGV0:
2042 if not revlogv1:
2042 if not revlogv1:
2043 warn(_("warning: `%s' uses revlog format 1") % name)
2043 warn(_("warning: `%s' uses revlog format 1") % name)
2044 elif revlogv1:
2044 elif revlogv1:
2045 warn(_("warning: `%s' uses revlog format 0") % name)
2045 warn(_("warning: `%s' uses revlog format 0") % name)
2046
2046
2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2047 revlogv1 = self.revlogversion != revlog.REVLOGV0
2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2048 if self.ui.verbose or revlogv1 != self.revlogv1:
2049 self.ui.status(_("repository uses revlog format %d\n") %
2049 self.ui.status(_("repository uses revlog format %d\n") %
2050 (revlogv1 and 1 or 0))
2050 (revlogv1 and 1 or 0))
2051
2051
2052 seen = {}
2052 seen = {}
2053 self.ui.status(_("checking changesets\n"))
2053 self.ui.status(_("checking changesets\n"))
2054 checksize(self.changelog, "changelog")
2054 checksize(self.changelog, "changelog")
2055
2055
2056 for i in range(self.changelog.count()):
2056 for i in range(self.changelog.count()):
2057 changesets += 1
2057 changesets += 1
2058 n = self.changelog.node(i)
2058 n = self.changelog.node(i)
2059 l = self.changelog.linkrev(n)
2059 l = self.changelog.linkrev(n)
2060 if l != i:
2060 if l != i:
2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2061 err(_("incorrect link (%d) for changeset revision %d") %(l, i))
2062 if n in seen:
2062 if n in seen:
2063 err(_("duplicate changeset at revision %d") % i)
2063 err(_("duplicate changeset at revision %d") % i)
2064 seen[n] = 1
2064 seen[n] = 1
2065
2065
2066 for p in self.changelog.parents(n):
2066 for p in self.changelog.parents(n):
2067 if p not in self.changelog.nodemap:
2067 if p not in self.changelog.nodemap:
2068 err(_("changeset %s has unknown parent %s") %
2068 err(_("changeset %s has unknown parent %s") %
2069 (short(n), short(p)))
2069 (short(n), short(p)))
2070 try:
2070 try:
2071 changes = self.changelog.read(n)
2071 changes = self.changelog.read(n)
2072 except KeyboardInterrupt:
2072 except KeyboardInterrupt:
2073 self.ui.warn(_("interrupted"))
2073 self.ui.warn(_("interrupted"))
2074 raise
2074 raise
2075 except Exception, inst:
2075 except Exception, inst:
2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2076 err(_("unpacking changeset %s: %s") % (short(n), inst))
2077 continue
2077 continue
2078
2078
2079 neededmanifests[changes[0]] = n
2079 neededmanifests[changes[0]] = n
2080
2080
2081 for f in changes[3]:
2081 for f in changes[3]:
2082 filelinkrevs.setdefault(f, []).append(i)
2082 filelinkrevs.setdefault(f, []).append(i)
2083
2083
2084 seen = {}
2084 seen = {}
2085 self.ui.status(_("checking manifests\n"))
2085 self.ui.status(_("checking manifests\n"))
2086 checkversion(self.manifest, "manifest")
2086 checkversion(self.manifest, "manifest")
2087 checksize(self.manifest, "manifest")
2087 checksize(self.manifest, "manifest")
2088
2088
2089 for i in range(self.manifest.count()):
2089 for i in range(self.manifest.count()):
2090 n = self.manifest.node(i)
2090 n = self.manifest.node(i)
2091 l = self.manifest.linkrev(n)
2091 l = self.manifest.linkrev(n)
2092
2092
2093 if l < 0 or l >= self.changelog.count():
2093 if l < 0 or l >= self.changelog.count():
2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2094 err(_("bad manifest link (%d) at revision %d") % (l, i))
2095
2095
2096 if n in neededmanifests:
2096 if n in neededmanifests:
2097 del neededmanifests[n]
2097 del neededmanifests[n]
2098
2098
2099 if n in seen:
2099 if n in seen:
2100 err(_("duplicate manifest at revision %d") % i)
2100 err(_("duplicate manifest at revision %d") % i)
2101
2101
2102 seen[n] = 1
2102 seen[n] = 1
2103
2103
2104 for p in self.manifest.parents(n):
2104 for p in self.manifest.parents(n):
2105 if p not in self.manifest.nodemap:
2105 if p not in self.manifest.nodemap:
2106 err(_("manifest %s has unknown parent %s") %
2106 err(_("manifest %s has unknown parent %s") %
2107 (short(n), short(p)))
2107 (short(n), short(p)))
2108
2108
2109 try:
2109 try:
2110 delta = mdiff.patchtext(self.manifest.delta(n))
2110 delta = mdiff.patchtext(self.manifest.delta(n))
2111 except KeyboardInterrupt:
2111 except KeyboardInterrupt:
2112 self.ui.warn(_("interrupted"))
2112 self.ui.warn(_("interrupted"))
2113 raise
2113 raise
2114 except Exception, inst:
2114 except Exception, inst:
2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2115 err(_("unpacking manifest %s: %s") % (short(n), inst))
2116 continue
2116 continue
2117
2117
2118 try:
2118 try:
2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2119 ff = [ l.split('\0') for l in delta.splitlines() ]
2120 for f, fn in ff:
2120 for f, fn in ff:
2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2121 filenodes.setdefault(f, {})[bin(fn[:40])] = 1
2122 except (ValueError, TypeError), inst:
2122 except (ValueError, TypeError), inst:
2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2123 err(_("broken delta in manifest %s: %s") % (short(n), inst))
2124
2124
2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2125 self.ui.status(_("crosschecking files in changesets and manifests\n"))
2126
2126
2127 for m, c in neededmanifests.items():
2127 for m, c in neededmanifests.items():
2128 err(_("Changeset %s refers to unknown manifest %s") %
2128 err(_("Changeset %s refers to unknown manifest %s") %
2129 (short(m), short(c)))
2129 (short(m), short(c)))
2130 del neededmanifests
2130 del neededmanifests
2131
2131
2132 for f in filenodes:
2132 for f in filenodes:
2133 if f not in filelinkrevs:
2133 if f not in filelinkrevs:
2134 err(_("file %s in manifest but not in changesets") % f)
2134 err(_("file %s in manifest but not in changesets") % f)
2135
2135
2136 for f in filelinkrevs:
2136 for f in filelinkrevs:
2137 if f not in filenodes:
2137 if f not in filenodes:
2138 err(_("file %s in changeset but not in manifest") % f)
2138 err(_("file %s in changeset but not in manifest") % f)
2139
2139
2140 self.ui.status(_("checking files\n"))
2140 self.ui.status(_("checking files\n"))
2141 ff = filenodes.keys()
2141 ff = filenodes.keys()
2142 ff.sort()
2142 ff.sort()
2143 for f in ff:
2143 for f in ff:
2144 if f == "/dev/null":
2144 if f == "/dev/null":
2145 continue
2145 continue
2146 files += 1
2146 files += 1
2147 if not f:
2147 if not f:
2148 err(_("file without name in manifest %s") % short(n))
2148 err(_("file without name in manifest %s") % short(n))
2149 continue
2149 continue
2150 fl = self.file(f)
2150 fl = self.file(f)
2151 checkversion(fl, f)
2151 checkversion(fl, f)
2152 checksize(fl, f)
2152 checksize(fl, f)
2153
2153
2154 nodes = {nullid: 1}
2154 nodes = {nullid: 1}
2155 seen = {}
2155 seen = {}
2156 for i in range(fl.count()):
2156 for i in range(fl.count()):
2157 revisions += 1
2157 revisions += 1
2158 n = fl.node(i)
2158 n = fl.node(i)
2159
2159
2160 if n in seen:
2160 if n in seen:
2161 err(_("%s: duplicate revision %d") % (f, i))
2161 err(_("%s: duplicate revision %d") % (f, i))
2162 if n not in filenodes[f]:
2162 if n not in filenodes[f]:
2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2163 err(_("%s: %d:%s not in manifests") % (f, i, short(n)))
2164 else:
2164 else:
2165 del filenodes[f][n]
2165 del filenodes[f][n]
2166
2166
2167 flr = fl.linkrev(n)
2167 flr = fl.linkrev(n)
2168 if flr not in filelinkrevs.get(f, []):
2168 if flr not in filelinkrevs.get(f, []):
2169 err(_("%s:%s points to unexpected changeset %d")
2169 err(_("%s:%s points to unexpected changeset %d")
2170 % (f, short(n), flr))
2170 % (f, short(n), flr))
2171 else:
2171 else:
2172 filelinkrevs[f].remove(flr)
2172 filelinkrevs[f].remove(flr)
2173
2173
2174 # verify contents
2174 # verify contents
2175 try:
2175 try:
2176 t = fl.read(n)
2176 t = fl.read(n)
2177 except KeyboardInterrupt:
2177 except KeyboardInterrupt:
2178 self.ui.warn(_("interrupted"))
2178 self.ui.warn(_("interrupted"))
2179 raise
2179 raise
2180 except Exception, inst:
2180 except Exception, inst:
2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2181 err(_("unpacking file %s %s: %s") % (f, short(n), inst))
2182
2182
2183 # verify parents
2183 # verify parents
2184 (p1, p2) = fl.parents(n)
2184 (p1, p2) = fl.parents(n)
2185 if p1 not in nodes:
2185 if p1 not in nodes:
2186 err(_("file %s:%s unknown parent 1 %s") %
2186 err(_("file %s:%s unknown parent 1 %s") %
2187 (f, short(n), short(p1)))
2187 (f, short(n), short(p1)))
2188 if p2 not in nodes:
2188 if p2 not in nodes:
2189 err(_("file %s:%s unknown parent 2 %s") %
2189 err(_("file %s:%s unknown parent 2 %s") %
2190 (f, short(n), short(p1)))
2190 (f, short(n), short(p1)))
2191 nodes[n] = 1
2191 nodes[n] = 1
2192
2192
2193 # cross-check
2193 # cross-check
2194 for node in filenodes[f]:
2194 for node in filenodes[f]:
2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2195 err(_("node %s in manifests not in %s") % (hex(node), f))
2196
2196
2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2197 self.ui.status(_("%d files, %d changesets, %d total revisions\n") %
2198 (files, changesets, revisions))
2198 (files, changesets, revisions))
2199
2199
2200 if warnings[0]:
2200 if warnings[0]:
2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2201 self.ui.warn(_("%d warnings encountered!\n") % warnings[0])
2202 if errors[0]:
2202 if errors[0]:
2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2203 self.ui.warn(_("%d integrity errors encountered!\n") % errors[0])
2204 return 1
2204 return 1
2205
2205
     def stream_in(self, remote):
+        fp = remote.stream_out()
+        resp = int(fp.readline())
+        if resp != 0:
+            raise util.Abort(_('operation forbidden by server'))
         self.ui.status(_('streaming all changes\n'))
-        fp = remote.stream_out()
         total_files, total_bytes = map(int, fp.readline().split(' ', 1))
         self.ui.status(_('%d files to transfer, %s of data\n') %
                        (total_files, util.bytecount(total_bytes)))
         start = time.time()
         for i in xrange(total_files):
             name, size = fp.readline().split('\0', 1)
             size = int(size)
             self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size)))
             ofp = self.opener(name, 'w')
             for chunk in util.filechunkiter(fp, limit=size):
                 ofp.write(chunk)
             ofp.close()
         elapsed = time.time() - start
         self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                        (util.bytecount(total_bytes), elapsed,
                         util.bytecount(total_bytes / elapsed)))
         self.reload()
         return len(self.heads()) + 1
2227
2230
     def clone(self, remote, heads=[], stream=False):
         '''clone remote repository.

         keyword arguments:
         heads: list of revs to clone (forces use of pull)
-        pull: force use of pull, even if remote can stream'''
+        stream: use streaming clone if possible'''

-        # now, all clients that can stream can read repo formats
-        # supported by all servers that can stream.
+        # now, all clients that can request uncompressed clones can
+        # read repo formats supported by all servers that can serve
+        # them.

         # if revlog format changes, client will have to check version
-        # and format flags on "stream" capability, and stream only if
-        # compatible.
+        # and format flags on "stream" capability, and use
+        # uncompressed only if compatible.

         if stream and not heads and remote.capable('stream'):
             return self.stream_in(remote)
         return self.pull(remote, heads)
2245
2249
2246 # used to avoid circular references so destructors work
2250 # used to avoid circular references so destructors work
2247 def aftertrans(base):
2251 def aftertrans(base):
2248 p = base
2252 p = base
2249 def a():
2253 def a():
2250 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2254 util.rename(os.path.join(p, "journal"), os.path.join(p, "undo"))
2251 util.rename(os.path.join(p, "journal.dirstate"),
2255 util.rename(os.path.join(p, "journal.dirstate"),
2252 os.path.join(p, "undo.dirstate"))
2256 os.path.join(p, "undo.dirstate"))
2253 return a
2257 return a
2254
2258
@@ -1,171 +1,173 @@
1 # sshserver.py - ssh protocol server support for mercurial
1 # sshserver.py - ssh protocol server support for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from demandload import demandload
8 from demandload import demandload
9 from i18n import gettext as _
9 from i18n import gettext as _
10 from node import *
10 from node import *
11 demandload(globals(), "os streamclone sys tempfile util")
11 demandload(globals(), "os streamclone sys tempfile util")
12
12
13 class sshserver(object):
13 class sshserver(object):
14 def __init__(self, ui, repo):
14 def __init__(self, ui, repo):
15 self.ui = ui
15 self.ui = ui
16 self.repo = repo
16 self.repo = repo
17 self.lock = None
17 self.lock = None
18 self.fin = sys.stdin
18 self.fin = sys.stdin
19 self.fout = sys.stdout
19 self.fout = sys.stdout
20
20
21 sys.stdout = sys.stderr
21 sys.stdout = sys.stderr
22
22
23 # Prevent insertion/deletion of CRs
23 # Prevent insertion/deletion of CRs
24 util.set_binary(self.fin)
24 util.set_binary(self.fin)
25 util.set_binary(self.fout)
25 util.set_binary(self.fout)
26
26
27 def getarg(self):
27 def getarg(self):
28 argline = self.fin.readline()[:-1]
28 argline = self.fin.readline()[:-1]
29 arg, l = argline.split()
29 arg, l = argline.split()
30 val = self.fin.read(int(l))
30 val = self.fin.read(int(l))
31 return arg, val
31 return arg, val
32
32
33 def respond(self, v):
33 def respond(self, v):
34 self.fout.write("%d\n" % len(v))
34 self.fout.write("%d\n" % len(v))
35 self.fout.write(v)
35 self.fout.write(v)
36 self.fout.flush()
36 self.fout.flush()
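# (illustrative only) wire format: an argument arrives as a line such as
# "roots 40\n" followed by 40 bytes of value (hypothetical name and length),
# and respond("foo\n") writes back "4\nfoo\n" -- a decimal length line,
# then the raw payload.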
37
37
38 def serve_forever(self):
38 def serve_forever(self):
39 while self.serve_one(): pass
39 while self.serve_one(): pass
40 sys.exit(0)
40 sys.exit(0)
41
41
42 def serve_one(self):
42 def serve_one(self):
43 cmd = self.fin.readline()[:-1]
43 cmd = self.fin.readline()[:-1]
44 if cmd:
44 if cmd:
45 impl = getattr(self, 'do_' + cmd, None)
45 impl = getattr(self, 'do_' + cmd, None)
46 if impl: impl()
46 if impl: impl()
47 else: self.respond("")
47 else: self.respond("")
48 return cmd != ''
48 return cmd != ''
49
49
50 def do_heads(self):
50 def do_heads(self):
51 h = self.repo.heads()
51 h = self.repo.heads()
52 self.respond(" ".join(map(hex, h)) + "\n")
52 self.respond(" ".join(map(hex, h)) + "\n")
53
53
54 def do_hello(self):
54 def do_hello(self):
55 '''the hello command returns a set of lines describing various
55 '''the hello command returns a set of lines describing various
56 interesting things about the server, in an RFC822-like format.
56 interesting things about the server, in an RFC822-like format.
57 Currently the only one defined is "capabilities", which
57 Currently the only one defined is "capabilities", which
58 consists of a line in the form:
58 consists of a line in the form:
59
59
60 capabilities: space separated list of tokens
60 capabilities: space separated list of tokens
61 '''
61 '''
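# (illustrative only) a server allowed to serve uncompressed clones might
# answer:
#   capabilities: unbundle stream=0
# where the stream value is the revlog version+flags of the served repo.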
62
62
63 r = "capabilities: unbundle stream=%d\n" % (self.repo.revlogversion,)
63 caps = ['unbundle']
64 self.respond(r)
64 if self.ui.configbool('server', 'uncompressed'):
65 caps.append('stream=%d' % self.repo.revlogversion)
66 self.respond("capabilities: %s\n" % (' '.join(caps),))
65
67
66 def do_lock(self):
68 def do_lock(self):
67 '''DEPRECATED - allowing remote client to lock repo is not safe'''
69 '''DEPRECATED - allowing remote client to lock repo is not safe'''
68
70
69 self.lock = self.repo.lock()
71 self.lock = self.repo.lock()
70 self.respond("")
72 self.respond("")
71
73
72 def do_unlock(self):
74 def do_unlock(self):
73 '''DEPRECATED'''
75 '''DEPRECATED'''
74
76
75 if self.lock:
77 if self.lock:
76 self.lock.release()
78 self.lock.release()
77 self.lock = None
79 self.lock = None
78 self.respond("")
80 self.respond("")
79
81
80 def do_branches(self):
82 def do_branches(self):
81 arg, nodes = self.getarg()
83 arg, nodes = self.getarg()
82 nodes = map(bin, nodes.split(" "))
84 nodes = map(bin, nodes.split(" "))
83 r = []
85 r = []
84 for b in self.repo.branches(nodes):
86 for b in self.repo.branches(nodes):
85 r.append(" ".join(map(hex, b)) + "\n")
87 r.append(" ".join(map(hex, b)) + "\n")
86 self.respond("".join(r))
88 self.respond("".join(r))
87
89
88 def do_between(self):
90 def do_between(self):
89 arg, pairs = self.getarg()
91 arg, pairs = self.getarg()
90 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
92 pairs = [map(bin, p.split("-")) for p in pairs.split(" ")]
91 r = []
93 r = []
92 for b in self.repo.between(pairs):
94 for b in self.repo.between(pairs):
93 r.append(" ".join(map(hex, b)) + "\n")
95 r.append(" ".join(map(hex, b)) + "\n")
94 self.respond("".join(r))
96 self.respond("".join(r))
95
97
96 def do_changegroup(self):
98 def do_changegroup(self):
97 nodes = []
99 nodes = []
98 arg, roots = self.getarg()
100 arg, roots = self.getarg()
99 nodes = map(bin, roots.split(" "))
101 nodes = map(bin, roots.split(" "))
100
102
101 cg = self.repo.changegroup(nodes, 'serve')
103 cg = self.repo.changegroup(nodes, 'serve')
102 while True:
104 while True:
103 d = cg.read(4096)
105 d = cg.read(4096)
104 if not d:
106 if not d:
105 break
107 break
106 self.fout.write(d)
108 self.fout.write(d)
107
109
108 self.fout.flush()
110 self.fout.flush()
109
111
110 def do_addchangegroup(self):
112 def do_addchangegroup(self):
111 '''DEPRECATED'''
113 '''DEPRECATED'''
112
114
113 if not self.lock:
115 if not self.lock:
114 self.respond("not locked")
116 self.respond("not locked")
115 return
117 return
116
118
117 self.respond("")
119 self.respond("")
118 r = self.repo.addchangegroup(self.fin, 'serve')
120 r = self.repo.addchangegroup(self.fin, 'serve')
119 self.respond(str(r))
121 self.respond(str(r))
120
122
121 def do_unbundle(self):
123 def do_unbundle(self):
122 their_heads = self.getarg()[1].split()
124 their_heads = self.getarg()[1].split()
123
125
124 def check_heads():
126 def check_heads():
125 heads = map(hex, self.repo.heads())
127 heads = map(hex, self.repo.heads())
126 return their_heads == [hex('force')] or their_heads == heads
128 return their_heads == [hex('force')] or their_heads == heads
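# note: hex('force') is just the hex encoding of the literal string 'force'
# ('666f726365'); a client that wants to push even though it would create
# new remote heads sends that instead of its view of the remote heads.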
127
129
128 # fail early if possible
130 # fail early if possible
129 if not check_heads():
131 if not check_heads():
130 self.respond(_('unsynced changes'))
132 self.respond(_('unsynced changes'))
131 return
133 return
132
134
133 self.respond('')
135 self.respond('')
134
136
135 # write bundle data to temporary file because it can be big
137 # write bundle data to temporary file because it can be big
136
138
137 try:
139 try:
138 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
140 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
139 fp = os.fdopen(fd, 'wb+')
141 fp = os.fdopen(fd, 'wb+')
140
142
141 count = int(self.fin.readline())
143 count = int(self.fin.readline())
142 while count:
144 while count:
143 fp.write(self.fin.read(count))
145 fp.write(self.fin.read(count))
144 count = int(self.fin.readline())
146 count = int(self.fin.readline())
145
147
146 was_locked = self.lock is not None
148 was_locked = self.lock is not None
147 if not was_locked:
149 if not was_locked:
148 self.lock = self.repo.lock()
150 self.lock = self.repo.lock()
149 try:
151 try:
150 if not check_heads():
152 if not check_heads():
151 # someone else committed/pushed/unbundled while we
153 # someone else committed/pushed/unbundled while we
152 # were transferring data
154 # were transferring data
153 self.respond(_('unsynced changes'))
155 self.respond(_('unsynced changes'))
154 return
156 return
155 self.respond('')
157 self.respond('')
156
158
157 # push can proceed
159 # push can proceed
158
160
159 fp.seek(0)
161 fp.seek(0)
160 r = self.repo.addchangegroup(fp, 'serve')
162 r = self.repo.addchangegroup(fp, 'serve')
161 self.respond(str(r))
163 self.respond(str(r))
162 finally:
164 finally:
163 if not was_locked:
165 if not was_locked:
164 self.lock.release()
166 self.lock.release()
165 self.lock = None
167 self.lock = None
166 finally:
168 finally:
167 fp.close()
169 fp.close()
168 os.unlink(tempname)
170 os.unlink(tempname)
169
171
170 def do_stream_out(self):
172 def do_stream_out(self):
171 streamclone.stream_out(self.repo, self.fout)
173 streamclone.stream_out(self.repo, self.fout)
@@ -1,82 +1,90 @@
1 # streamclone.py - streaming clone server support for mercurial
1 # streamclone.py - streaming clone server support for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from demandload import demandload
8 from demandload import demandload
9 from i18n import gettext as _
9 from i18n import gettext as _
10 demandload(globals(), "os stat util")
10 demandload(globals(), "os stat util")
11
11
12 # if server supports streaming clone, it advertises "stream"
12 # if server supports streaming clone, it advertises "stream"
13 # capability with value that is version+flags of repo it is serving.
13 # capability with value that is version+flags of repo it is serving.
14 # client only streams if it can read that repo format.
14 # client only streams if it can read that repo format.
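# e.g. a server whose repository still uses revlog format 0 would advertise
# "stream=0" (the value comes from repo.revlogversion, see sshserver.do_hello).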
15
15
16 def walkrepo(root):
16 def walkrepo(root):
17 '''iterate over metadata files in repository.
17 '''iterate over metadata files in repository.
18 walk in natural (sorted) order.
18 walk in natural (sorted) order.
19 yields 2-tuples: name of .d or .i file, size of file.'''
19 yields 2-tuples: name of .d or .i file, size of file.'''
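# e.g. ('data/foo.txt.i', 1234) for a hypothetical tracked file foo.txt,
# or ('00manifest.i', <size>) for the manifest index.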
20
20
21 strip_count = len(root) + len(os.sep)
21 strip_count = len(root) + len(os.sep)
22 def walk(path, recurse):
22 def walk(path, recurse):
23 ents = os.listdir(path)
23 ents = os.listdir(path)
24 ents.sort()
24 ents.sort()
25 for e in ents:
25 for e in ents:
26 pe = os.path.join(path, e)
26 pe = os.path.join(path, e)
27 st = os.lstat(pe)
27 st = os.lstat(pe)
28 if stat.S_ISDIR(st.st_mode):
28 if stat.S_ISDIR(st.st_mode):
29 if recurse:
29 if recurse:
30 for x in walk(pe, True):
30 for x in walk(pe, True):
31 yield x
31 yield x
32 else:
32 else:
33 if not stat.S_ISREG(st.st_mode) or len(e) < 2:
33 if not stat.S_ISREG(st.st_mode) or len(e) < 2:
34 continue
34 continue
35 sfx = e[-2:]
35 sfx = e[-2:]
36 if sfx in ('.d', '.i'):
36 if sfx in ('.d', '.i'):
37 yield pe[strip_count:], st.st_size
37 yield pe[strip_count:], st.st_size
38 # write file data first
38 # write file data first
39 for x in walk(os.path.join(root, 'data'), True):
39 for x in walk(os.path.join(root, 'data'), True):
40 yield x
40 yield x
41 # write manifest before changelog
41 # write manifest before changelog
42 meta = list(walk(root, False))
42 meta = list(walk(root, False))
-    meta.sort(reverse=True)
+    meta.sort()
+    meta.reverse()
44 for x in meta:
45 for x in meta:
45 yield x
46 yield x
46
47
47 # stream file format is simple.
48 # stream file format is simple.
48 #
49 #
49 # server writes out line that says how many files, how many total
50 # server writes out line that says how many files, how many total
50 # bytes. separator is ascii space, byte counts are strings.
51 # bytes. separator is ascii space, byte counts are strings.
51 #
52 #
52 # then for each file:
53 # then for each file:
53 #
54 #
54 # server writes out line that says file name, how many bytes in
55 # server writes out line that says file name, how many bytes in
55 # file. separator is ascii nul, byte count is string.
56 # file. separator is ascii nul, byte count is string.
56 #
57 #
57 # server writes out raw file data.
58 # server writes out raw file data.
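# (illustrative only) a two-file transfer could look like:
#   "2 8192\n"
#   "data/foo.txt.i\x004096\n" followed by 4096 bytes of raw revlog data
#   "00manifest.i\x004096\n" followed by 4096 bytes of raw revlog data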
58
59
59 def stream_out(repo, fileobj):
60 def stream_out(repo, fileobj):
60 '''stream out all metadata files in repository.
61 '''stream out all metadata files in repository.
61 writes to file-like object, must support write() and optional flush().'''
62 writes to file-like object, must support write() and optional flush().'''
+
+    if not repo.ui.configbool('server', 'uncompressed'):
+        fileobj.write('1\n')
+        return
+
+    fileobj.write('0\n')
+
62 # get consistent snapshot of repo. lock during scan so lock not
70 # get consistent snapshot of repo. lock during scan so lock not
63 # needed while we stream, and commits can happen.
71 # needed while we stream, and commits can happen.
64 lock = repo.lock()
72 lock = repo.lock()
65 repo.ui.debug('scanning\n')
73 repo.ui.debug('scanning\n')
66 entries = []
74 entries = []
67 total_bytes = 0
75 total_bytes = 0
68 for name, size in walkrepo(repo.path):
76 for name, size in walkrepo(repo.path):
69 entries.append((name, size))
77 entries.append((name, size))
70 total_bytes += size
78 total_bytes += size
71 lock.release()
79 lock.release()
72
80
73 repo.ui.debug('%d files, %d bytes to transfer\n' %
81 repo.ui.debug('%d files, %d bytes to transfer\n' %
74 (len(entries), total_bytes))
82 (len(entries), total_bytes))
75 fileobj.write('%d %d\n' % (len(entries), total_bytes))
83 fileobj.write('%d %d\n' % (len(entries), total_bytes))
76 for name, size in entries:
84 for name, size in entries:
77 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
85 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
78 fileobj.write('%s\0%d\n' % (name, size))
86 fileobj.write('%s\0%d\n' % (name, size))
79 for chunk in util.filechunkiter(repo.opener(name), limit=size):
87 for chunk in util.filechunkiter(repo.opener(name), limit=size):
80 fileobj.write(chunk)
88 fileobj.write(chunk)
81 flush = getattr(fileobj, 'flush', None)
89 flush = getattr(fileobj, 'flush', None)
82 if flush: flush()
90 if flush: flush()
@@ -1,363 +1,363 @@
1 # ui.py - user interface bits for mercurial
1 # ui.py - user interface bits for mercurial
2 #
2 #
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from i18n import gettext as _
8 from i18n import gettext as _
9 from demandload import *
9 from demandload import *
10 demandload(globals(), "errno getpass os re smtplib socket sys tempfile")
10 demandload(globals(), "errno getpass os re smtplib socket sys tempfile")
11 demandload(globals(), "ConfigParser templater traceback util")
11 demandload(globals(), "ConfigParser templater traceback util")
12
12
13 class ui(object):
13 class ui(object):
14 def __init__(self, verbose=False, debug=False, quiet=False,
14 def __init__(self, verbose=False, debug=False, quiet=False,
15 interactive=True, traceback=False, parentui=None):
15 interactive=True, traceback=False, parentui=None):
16 self.overlay = {}
16 self.overlay = {}
17 if parentui is None:
17 if parentui is None:
18 # this is the parent of all ui children
18 # this is the parent of all ui children
19 self.parentui = None
19 self.parentui = None
20 self.cdata = ConfigParser.SafeConfigParser()
20 self.cdata = ConfigParser.SafeConfigParser()
21 self.readconfig(util.rcpath())
21 self.readconfig(util.rcpath())
22
22
23 self.quiet = self.configbool("ui", "quiet")
23 self.quiet = self.configbool("ui", "quiet")
24 self.verbose = self.configbool("ui", "verbose")
24 self.verbose = self.configbool("ui", "verbose")
25 self.debugflag = self.configbool("ui", "debug")
25 self.debugflag = self.configbool("ui", "debug")
26 self.interactive = self.configbool("ui", "interactive", True)
26 self.interactive = self.configbool("ui", "interactive", True)
27 self.traceback = traceback
27 self.traceback = traceback
28
28
29 self.updateopts(verbose, debug, quiet, interactive)
29 self.updateopts(verbose, debug, quiet, interactive)
30 self.diffcache = None
30 self.diffcache = None
31 self.header = []
31 self.header = []
32 self.prev_header = []
32 self.prev_header = []
33 self.revlogopts = self.configrevlog()
33 self.revlogopts = self.configrevlog()
34 else:
34 else:
35 # parentui may point to an ui object which is already a child
35 # parentui may point to an ui object which is already a child
36 self.parentui = parentui.parentui or parentui
36 self.parentui = parentui.parentui or parentui
37 parent_cdata = self.parentui.cdata
37 parent_cdata = self.parentui.cdata
38 self.cdata = ConfigParser.SafeConfigParser(parent_cdata.defaults())
38 self.cdata = ConfigParser.SafeConfigParser(parent_cdata.defaults())
39 # make interpolation work
39 # make interpolation work
40 for section in parent_cdata.sections():
40 for section in parent_cdata.sections():
41 self.cdata.add_section(section)
41 self.cdata.add_section(section)
42 for name, value in parent_cdata.items(section, raw=True):
42 for name, value in parent_cdata.items(section, raw=True):
43 self.cdata.set(section, name, value)
43 self.cdata.set(section, name, value)
44
44
45 def __getattr__(self, key):
45 def __getattr__(self, key):
46 return getattr(self.parentui, key)
46 return getattr(self.parentui, key)
47
47
48 def updateopts(self, verbose=False, debug=False, quiet=False,
48 def updateopts(self, verbose=False, debug=False, quiet=False,
49 interactive=True, traceback=False, config=[]):
49 interactive=True, traceback=False, config=[]):
50 self.quiet = (self.quiet or quiet) and not verbose and not debug
50 self.quiet = (self.quiet or quiet) and not verbose and not debug
51 self.verbose = (self.verbose or verbose) or debug
51 self.verbose = (self.verbose or verbose) or debug
52 self.debugflag = (self.debugflag or debug)
52 self.debugflag = (self.debugflag or debug)
53 self.interactive = (self.interactive and interactive)
53 self.interactive = (self.interactive and interactive)
54 self.traceback = self.traceback or traceback
54 self.traceback = self.traceback or traceback
55 for cfg in config:
55 for cfg in config:
56 try:
56 try:
57 name, value = cfg.split('=', 1)
57 name, value = cfg.split('=', 1)
58 section, name = name.split('.', 1)
58 section, name = name.split('.', 1)
59 if not self.cdata.has_section(section):
59 if not self.cdata.has_section(section):
60 self.cdata.add_section(section)
60 self.cdata.add_section(section)
61 if not section or not name:
61 if not section or not name:
62 raise IndexError
62 raise IndexError
63 self.cdata.set(section, name, value)
63 self.cdata.set(section, name, value)
64 except (IndexError, ValueError):
64 except (IndexError, ValueError):
65 raise util.Abort(_('malformed --config option: %s') % cfg)
65 raise util.Abort(_('malformed --config option: %s') % cfg)
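# e.g. --config ui.username=alice (hypothetical value) ends up as
# section "ui", name "username", value "alice" in self.cdata.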
66
66
    def readconfig(self, fn, root=None):
        if isinstance(fn, basestring):
            fn = [fn]
        for f in fn:
            try:
                self.cdata.read(f)
            except ConfigParser.ParsingError, inst:
                raise util.Abort(_("Failed to parse %s\n%s") % (f, inst))
        # translate paths relative to root (or home) into absolute paths
        if root is None:
            root = os.path.expanduser('~')
        for name, path in self.configitems("paths"):
            if path and "://" not in path and not os.path.isabs(path):
                self.cdata.set("paths", name, os.path.join(root, path))

    def setconfig(self, section, name, val):
        self.overlay[(section, name)] = val

    def config(self, section, name, default=None):
        if self.overlay.has_key((section, name)):
            return self.overlay[(section, name)]
        if self.cdata.has_option(section, name):
            try:
                return self.cdata.get(section, name)
            except ConfigParser.InterpolationError, inst:
                raise util.Abort(_("Error in configuration:\n%s") % inst)
        if self.parentui is None:
            return default
        else:
            return self.parentui.config(section, name, default)

    def configlist(self, section, name, default=None):
        """Return a list of comma/space separated strings"""
        result = self.config(section, name)
        if result is None:
            result = default or []
        if isinstance(result, basestring):
            result = result.replace(",", " ").split()
        return result

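Not part of the changeset, but as an illustration: configlist() above normalises a comma- or space-separated hgrc value into a list. A standalone equivalent of that normalisation:

def split_configlist(value):
    # same rule as ui.configlist: commas become spaces, then whitespace split
    return value.replace(",", " ").split()

assert split_configlist("foo,bar baz") == ["foo", "bar", "baz"]
assert split_configlist("  foo ,  bar  ") == ["foo", "bar"]
assert split_configlist("") == []
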
    def configbool(self, section, name, default=False):
        if self.overlay.has_key((section, name)):
            return self.overlay[(section, name)]
        if self.cdata.has_option(section, name):
            try:
                return self.cdata.getboolean(section, name)
            except ConfigParser.InterpolationError, inst:
                raise util.Abort(_("Error in configuration:\n%s") % inst)
        if self.parentui is None:
            return default
        else:
            return self.parentui.configbool(section, name, default)

    def has_config(self, section):
        '''tell whether section exists in config.'''
        return self.cdata.has_section(section)

    def configitems(self, section):
        items = {}
        if self.parentui is not None:
            items = dict(self.parentui.configitems(section))
        if self.cdata.has_section(section):
            try:
                items.update(dict(self.cdata.items(section)))
            except ConfigParser.InterpolationError, inst:
                raise util.Abort(_("Error in configuration:\n%s") % inst)
        x = items.items()
        x.sort()
        return x

    def walkconfig(self, seen=None):
        if seen is None:
            seen = {}
        for (section, name), value in self.overlay.iteritems():
            yield section, name, value
            seen[section, name] = 1
        for section in self.cdata.sections():
            for name, value in self.cdata.items(section):
                if (section, name) in seen: continue
                yield section, name, value.replace('\n', '\\n')
                seen[section, name] = 1
        if self.parentui is not None:
            for parent in self.parentui.walkconfig(seen):
                yield parent

    def extensions(self):
        result = self.configitems("extensions")
        for i, (key, value) in enumerate(result):
            if value:
                result[i] = (key, os.path.expanduser(value))
        return result

    def hgignorefiles(self):
        result = []
        for key, value in self.configitems("ui"):
            if key == 'ignore' or key.startswith('ignore.'):
                result.append(os.path.expanduser(value))
        return result

    def configrevlog(self):
        result = {}
        for key, value in self.configitems("revlog"):
            result[key.lower()] = value
        return result

    def diffopts(self):
        if self.diffcache:
            return self.diffcache
        result = {'showfunc': True, 'ignorews': False,
                  'ignorewsamount': False, 'ignoreblanklines': False}
        for key, value in self.configitems("diff"):
            if value:
                result[key.lower()] = (value.lower() == 'true')
        self.diffcache = result
        return result

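An illustrative aside, not part of the changeset: config, configbool and configitems above all consult the same layers in order — the setconfig() overlay, then the values read from hgrc files, then the parent ui, then the caller-supplied default. A minimal standalone sketch of that precedence; the dict-based store and names are assumptions for illustration only:

def lookup(key, overlay, local, parent_lookup=None, default=None):
    # precedence: explicit setconfig() overlay, then this ui's config files,
    # then the parent ui (if any), then the default
    if key in overlay:
        return overlay[key]
    if key in local:
        return local[key]
    if parent_lookup is not None:
        return parent_lookup(key, default)
    return default

parent_vals = {"ui.editor": "vi"}
parent = lambda key, default=None: parent_vals.get(key, default)
assert lookup("ui.editor", {}, {}, parent) == "vi"
assert lookup("ui.editor", {}, {"ui.editor": "emacs"}, parent) == "emacs"
assert lookup("ui.editor", {"ui.editor": "nano"}, {"ui.editor": "emacs"}, parent) == "nano"
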
    def username(self):
        """Return default username to be used in commits.

        Searched in this order: $HGUSER, the [ui] section of hgrc files,
        $EMAIL; the search stops at the first one that is set.
        Abort if the username found is an empty string, to force specifying
        the commit user elsewhere, e.g. with a command-line option or in the
        repository hgrc.
        If none is found, use ($LOGNAME or $USER or $LNAME or
        $USERNAME) + "@full.hostname".
        """
        user = os.environ.get("HGUSER")
        if user is None:
            user = self.config("ui", "username")
        if user is None:
            user = os.environ.get("EMAIL")
        if user is None:
            try:
                user = '%s@%s' % (getpass.getuser(), socket.getfqdn())
            except KeyError:
                raise util.Abort(_("Please specify a username."))
        return user

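As a hedged illustration (not from the changeset): a standalone sketch of the lookup order documented in username() above — $HGUSER, then ui.username from the config, then $EMAIL. The fallback address stands in for the getpass/socket branch, and all sample values are hypothetical:

def guess_username(environ, ui_username):
    # same precedence as ui.username() above
    user = environ.get("HGUSER")
    if user is None:
        user = ui_username
    if user is None:
        user = environ.get("EMAIL")
    if user is None:
        user = "someone@example.com"   # stand-in for the user@host fallback
    return user

assert guess_username({"HGUSER": "Alice <a@example.com>"}, None) == "Alice <a@example.com>"
assert guess_username({}, "Bob <b@example.com>") == "Bob <b@example.com>"
assert guess_username({"EMAIL": "c@example.com"}, None) == "c@example.com"
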
    def shortuser(self, user):
        """Return a short representation of a user name or email address."""
        if not self.verbose: user = util.shortuser(user)
        return user

    def expandpath(self, loc, default=None):
        """Return repository location relative to cwd or from [paths]"""
        if "://" in loc or os.path.isdir(loc):  # changed here from os.path.exists(loc)
            return loc

        path = self.config("paths", loc)
        if not path and default is not None:
            path = self.config("paths", default)
        return path or loc

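An illustrative aside, not part of the changeset: in expandpath() above, URLs and existing directories pass through unchanged, and anything else is treated as a symbolic name to be looked up in [paths]. A standalone sketch with hypothetical entries:

import os

def expand(loc, paths, default=None):
    # mirrors ui.expandpath: pass URLs and local directories through,
    # otherwise resolve the symbolic name via the [paths] mapping
    if "://" in loc or os.path.isdir(loc):
        return loc
    path = paths.get(loc)
    if not path and default is not None:
        path = paths.get(default)
    return path or loc

paths = {"default": "http://hg.example.com/repo"}   # hypothetical [paths] entries
assert expand("upstream", paths, "default") == "http://hg.example.com/repo"
assert expand("http://other.example.com/", paths) == "http://other.example.com/"
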
    def setconfig_remoteopts(self, **opts):
        if opts.get('ssh'):
            self.setconfig("ui", "ssh", opts['ssh'])
        if opts.get('remotecmd'):
            self.setconfig("ui", "remotecmd", opts['remotecmd'])

    def write(self, *args):
        if self.header:
            if self.header != self.prev_header:
                self.prev_header = self.header
                self.write(*self.header)
                self.header = []
        for a in args:
            sys.stdout.write(str(a))

    def write_header(self, *args):
        for a in args:
            self.header.append(str(a))

    def write_err(self, *args):
        try:
            if not sys.stdout.closed: sys.stdout.flush()
            for a in args:
                sys.stderr.write(str(a))
        except IOError, inst:
            if inst.errno != errno.EPIPE:
                raise

    def flush(self):
        try: sys.stdout.flush()
        except: pass
        try: sys.stderr.flush()
        except: pass

    def readline(self):
        return sys.stdin.readline()[:-1]
    def prompt(self, msg, pat=None, default="y"):
        if not self.interactive: return default
        while 1:
            self.write(msg, " ")
            r = self.readline()
            if not pat or re.match(pat, r):
                return r
            else:
                self.write(_("unrecognized response\n"))
    def getpass(self, prompt=None, default=None):
        if not self.interactive: return default
        return getpass.getpass(prompt or _('password: '))
    def status(self, *msg):
        if not self.quiet: self.write(*msg)
    def warn(self, *msg):
        self.write_err(*msg)
    def note(self, *msg):
        if self.verbose: self.write(*msg)
    def debug(self, *msg):
        if self.debugflag: self.write(*msg)
    def edit(self, text, user):
        (fd, name) = tempfile.mkstemp(prefix="hg-editor-", suffix=".txt",
                                      text=True)
        try:
            f = os.fdopen(fd, "w")
            f.write(text)
            f.close()

            editor = (os.environ.get("HGEDITOR") or
                      self.config("ui", "editor") or
                      os.environ.get("EDITOR", "vi"))

            util.system("%s \"%s\"" % (editor, name),
                        environ={'HGUSER': user},
                        onerr=util.Abort, errprefix=_("edit failed"))

            f = open(name)
            t = f.read()
            f.close()
            t = re.sub("(?m)^HG:.*\n", "", t)
        finally:
            os.unlink(name)

        return t

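Not part of the changeset, but as an illustration: the editor chosen by edit() above follows a simple fallback chain, sketched standalone below (sample editor names are hypothetical):

def pick_editor(environ, configured):
    # same fallback chain as ui.edit() above:
    # $HGEDITOR, then ui.editor from the config, then $EDITOR, then vi
    return (environ.get("HGEDITOR") or
            configured or
            environ.get("EDITOR", "vi"))

assert pick_editor({}, None) == "vi"
assert pick_editor({"EDITOR": "nano"}, None) == "nano"
assert pick_editor({"EDITOR": "nano"}, "emacs") == "emacs"
assert pick_editor({"HGEDITOR": "kate", "EDITOR": "nano"}, "emacs") == "kate"
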
    def sendmail(self):
        '''send mail message. object returned has one method, sendmail.
        call as sendmail(sender, list-of-recipients, msg).'''

        def smtp():
            '''send mail using smtp.'''

            local_hostname = self.config('smtp', 'local_hostname')
            s = smtplib.SMTP(local_hostname=local_hostname)
            mailhost = self.config('smtp', 'host')
            if not mailhost:
                raise util.Abort(_('no [smtp]host in hgrc - cannot send mail'))
            mailport = int(self.config('smtp', 'port', 25))
            self.note(_('sending mail: smtp host %s, port %s\n') %
                      (mailhost, mailport))
            s.connect(host=mailhost, port=mailport)
            if self.configbool('smtp', 'tls'):
                self.note(_('(using tls)\n'))
                s.ehlo()
                s.starttls()
                s.ehlo()
            username = self.config('smtp', 'username')
            password = self.config('smtp', 'password')
            if username and password:
                self.note(_('(authenticating to mail server as %s)\n') %
                          (username))
                s.login(username, password)
            return s

        class sendmail(object):
            '''send mail using sendmail.'''

            def __init__(self, ui, program):
                self.ui = ui
                self.program = program

            def sendmail(self, sender, recipients, msg):
                cmdline = '%s -f %s %s' % (
                    self.program, templater.email(sender),
                    ' '.join(map(templater.email, recipients)))
                self.ui.note(_('sending mail: %s\n') % cmdline)
                fp = os.popen(cmdline, 'w')
                fp.write(msg)
                ret = fp.close()
                if ret:
                    raise util.Abort('%s %s' % (
                        os.path.basename(self.program.split(None, 1)[0]),
                        util.explain_exit(ret)[0]))

        method = self.config('email', 'method', 'smtp')
        if method == 'smtp':
            mail = smtp()
        else:
            mail = sendmail(self, method)
        return mail

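A hedged usage sketch, not part of the changeset, based only on the docstring and code above: sendmail() returns an object whose sendmail(sender, list-of-recipients, msg) method delivers the message (either smtplib.SMTP or the sendmail wrapper). The wrapper function and addresses below are hypothetical:

def announce(ui_obj, sender, recipients, body):
    # per the docstring above: sendmail() returns an object whose
    # sendmail(sender, list-of-recipients, msg) method delivers the message
    mailer = ui_obj.sendmail()
    mailer.sendmail(sender, recipients, body)

# e.g. announce(u, 'me@example.com', ['you@example.com'],
#               'Subject: hello\n\nbody text\n')
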
    def print_exc(self):
        '''print exception traceback if traceback printing enabled.
        only to call in exception handler. returns true if traceback
        printed.'''
        if self.traceback:
            traceback.print_exc()
        return self.traceback
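
An illustrative aside, not part of the changeset: a sketch of how print_exc() above is meant to be used from an exception handler — print the full traceback only when traceback printing is enabled, otherwise fall back to a short warning. The wrapper function is hypothetical:

def report(ui_obj, err):
    # print_exc() returns a true value only when it printed a traceback;
    # otherwise emit a one-line warning via ui.warn()
    if not ui_obj.print_exc():
        ui_obj.warn("abort: %s\n" % err)
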
@@ -1,31 +1,51 @@
 # basic operation
 adding a
 reverting a
 changeset 2:b38a34ddfd9f backs out changeset 1:a820f4f40a57
 a
 # file that was removed is recreated
 adding a
 adding a
 changeset 2:44cd84c7349a backs out changeset 1:76862dcce372
 content
 # backout of backout is as if nothing happened
 removing a
 changeset 3:0dd8a0ed5e99 backs out changeset 2:44cd84c7349a
 cat: a: No such file or directory
 # backout with merge
 adding a
 reverting a
 changeset 3:6c77ecc28460 backs out changeset 1:314f55b1bf23
 merging with changeset 2:b66ea5b77abb
 merging a
 0 files updated, 1 files merged, 0 files removed, 0 files unresolved
 (branch merge, don't forget to commit)
 line 1
 # backout should not back out subsequent changesets
 adding a
 adding b
 reverting a
 changeset 3:4cbb1e70196a backs out changeset 1:22bca4c721e5
 the backout changeset is a new head - do not forget to merge
 (use "backout -m" if you want to auto-merge)
 b: No such file or directory
+adding a
+adding b
+adding c
+0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+adding d
+1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+(branch merge, don't forget to commit)
+# backout of merge should fail
+abort: cannot back out a merge changeset without --parent
+# backout of merge with bad parent should fail
+abort: cb9a9f314b8b is not a parent of b2f3bb92043e
+# backout of non-merge with parent should fail
+abort: cannot use --parent on non-merge changeset
+# backout with valid parent should be ok
+removing d
+changeset 5:11fbd9be634c backs out changeset 4:b2f3bb92043e
+rolling back last transaction
+1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+removing c
+changeset 5:1a5f1a63bf2c backs out changeset 4:b2f3bb92043e
@@ -1,25 +1,25 @@
 #!/bin/sh
 
-mkdir test
+hg init test
 cd test
 echo foo>foo
-hg init
-hg addremove
-hg commit -m 1
-hg verify
-hg serve -p 20059 -d --pid-file=hg.pid
-cat hg.pid >> $DAEMON_PIDS
+hg commit -A -d '0 0' -m 1
+hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg1.pid
+cat hg1.pid >> $DAEMON_PIDS
+hg serve -p 20060 -d --pid-file=hg2.pid
+cat hg2.pid >> $DAEMON_PIDS
 cd ..
 
 echo % clone via stream
-http_proxy= hg clone --stream http://localhost:20059/ copy 2>&1 | \
+http_proxy= hg clone --uncompressed http://localhost:20059/ copy 2>&1 | \
   sed -e 's/[0-9][0-9.]*/XXX/g'
 cd copy
 hg verify
 
-cd ..
+echo % try to clone via stream, should use pull instead
+http_proxy= hg clone --uncompressed http://localhost:20060/ copy2
 
 echo % clone via pull
 http_proxy= hg clone http://localhost:20059/ copy-pull
 cd copy-pull
 hg verify
@@ -1,41 +1,41 @@
 #!/bin/sh
 
 hg init a
 cd a
 echo a > a
 hg ci -Ama -d '1123456789 0'
-hg serve -p 20059 -d --pid-file=hg.pid
+hg --config server.uncompressed=True serve -p 20059 -d --pid-file=hg.pid
 cat hg.pid >> $DAEMON_PIDS
 
 cd ..
 ("$TESTDIR/tinyproxy.py" 20060 localhost >proxy.log 2>&1 </dev/null &
 echo $! > proxy.pid)
 cat proxy.pid >> $DAEMON_PIDS
 sleep 2
 
 echo %% url for proxy, stream
-http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --stream http://localhost:20059/ b | \
+http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone --uncompressed http://localhost:20059/ b | \
   sed -e 's/[0-9][0-9.]*/XXX/g'
 cd b
 hg verify
 cd ..
 
 echo %% url for proxy, pull
 http_proxy=http://localhost:20060/ hg --config http_proxy.always=True clone http://localhost:20059/ b-pull
 cd b-pull
 hg verify
 cd ..
 
 echo %% host:port for proxy
 http_proxy=localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ c
 
 echo %% proxy url with user name and password
 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://localhost:20059/ d
 
 echo %% url with user name and password
 http_proxy=http://user:passwd@localhost:20060 hg clone --config http_proxy.always=True http://user:passwd@localhost:20059/ e
 
 echo %% bad host:port for proxy
 http_proxy=localhost:20061 hg clone --config http_proxy.always=True http://localhost:20059/ f
 
 exit 0
@@ -1,29 +1,30 @@
-(the addremove command is deprecated; use add and remove --after instead)
 adding foo
-checking changesets
-checking manifests
-crosschecking files in changesets and manifests
-checking files
-1 files, 1 changesets, 1 total revisions
 % clone via stream
 streaming all changes
 XXX files to transfer, XXX bytes of data
 transferred XXX bytes in XXX seconds (XXX KB/sec)
 XXX files updated, XXX files merged, XXX files removed, XXX files unresolved
 checking changesets
 checking manifests
 crosschecking files in changesets and manifests
 checking files
 1 files, 1 changesets, 1 total revisions
+% try to clone via stream, should use pull instead
+requesting all changes
+adding changesets
+adding manifests
+adding file changes
+added 1 changesets with 1 changes to 1 files
+1 files updated, 0 files merged, 0 files removed, 0 files unresolved
 % clone via pull
 requesting all changes
 adding changesets
 adding manifests
 adding file changes
 added 1 changesets with 1 changes to 1 files
 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
 checking changesets
 checking manifests
 crosschecking files in changesets and manifests
 checking files
 1 files, 1 changesets, 1 total revisions
@@ -1,90 +1,92 @@
 #!/bin/sh
 
 # This test tries to exercise the ssh functionality with a dummy script
 
 cat <<'EOF' > dummyssh
 #!/bin/sh
 # this attempts to deal with relative pathnames
 cd `dirname $0`
 
 # check for proper args
 if [ $1 != "user@dummy" ] ; then
     exit -1
 fi
 
 # check that we're in the right directory
 if [ ! -x dummyssh ] ; then
     exit -1
 fi
 
 echo Got arguments 1:$1 2:$2 3:$3 4:$4 5:$5 >> dummylog
 $2
 EOF
 chmod +x dummyssh
 
 echo "# creating 'remote'"
 hg init remote
 cd remote
 echo this > foo
 hg ci -A -m "init" -d "1000000 0" foo
+echo '[server]' > .hg/hgrc
+echo 'uncompressed = True' >> .hg/hgrc
 
 cd ..
 
 echo "# clone remote via stream"
-hg clone -e ./dummyssh --stream ssh://user@dummy/remote local-stream 2>&1 | \
+hg clone -e ./dummyssh --uncompressed ssh://user@dummy/remote local-stream 2>&1 | \
   sed -e 's/[0-9][0-9.]*/XXX/g'
 cd local-stream
 hg verify
 cd ..
 
 echo "# clone remote via pull"
 hg clone -e ./dummyssh ssh://user@dummy/remote local
 
 echo "# verify"
 cd local
 hg verify
 
 echo "# empty default pull"
 hg paths
 hg pull -e ../dummyssh
 
 echo "# local change"
 echo bleah > foo
 hg ci -m "add" -d "1000000 0"
 
 echo "# updating rc"
 echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
 echo "[ui]" >> .hg/hgrc
 echo "ssh = ../dummyssh" >> .hg/hgrc
 
 echo "# find outgoing"
 hg out ssh://user@dummy/remote
 
 echo "# find incoming on the remote side"
 hg incoming -R ../remote -e ../dummyssh ssh://user@dummy/local
 
 echo "# push"
 hg push
 
 cd ../remote
 
 echo "# check remote tip"
 hg tip
 hg verify
 hg cat foo
 
 echo z > z
 hg ci -A -m z -d '1000001 0' z
 
 cd ../local
 echo r > r
 hg ci -A -m z -d '1000002 0' r
 
 echo "# push should fail"
 hg push
 
 echo "# push should succeed"
 hg push -f
 
 cd ..
 cat dummylog